Dec 2 01:43:57 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Dec 2 01:43:57 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 2 01:43:57 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 2 01:43:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 2 01:43:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 2 01:43:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 2 01:43:57 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 2 01:43:57 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 2 01:43:57 localhost kernel: signal: max sigframe size: 1776
Dec 2 01:43:57 localhost kernel: BIOS-provided physical RAM map:
Dec 2 01:43:57 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 2 01:43:57 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 2 01:43:57 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 2 01:43:57 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 2 01:43:57 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 2 01:43:57 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 2 01:43:57 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 2 01:43:57 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Dec 2 01:43:57 localhost kernel: NX (Execute Disable) protection: active
Dec 2 01:43:57 localhost kernel: SMBIOS 2.8 present.
Dec 2 01:43:57 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 2 01:43:57 localhost kernel: Hypervisor detected: KVM
Dec 2 01:43:57 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 2 01:43:57 localhost kernel: kvm-clock: using sched offset of 1909811483 cycles
Dec 2 01:43:57 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 2 01:43:57 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 2 01:43:57 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Dec 2 01:43:57 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 2 01:43:57 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 2 01:43:57 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 2 01:43:57 localhost kernel: Using GB pages for direct mapping
Dec 2 01:43:57 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Dec 2 01:43:57 localhost kernel: ACPI: Early table checksum verification disabled
Dec 2 01:43:57 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 2 01:43:57 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 2 01:43:57 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 2 01:43:57 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 2 01:43:57 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 2 01:43:57 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 2 01:43:57 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 2 01:43:57 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 2 01:43:57 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 2 01:43:57 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 2 01:43:57 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 2 01:43:57 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 2 01:43:57 localhost kernel: No NUMA configuration found
Dec 2 01:43:57 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Dec 2 01:43:57 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff]
Dec 2 01:43:57 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Dec 2 01:43:57 localhost kernel: Zone ranges:
Dec 2 01:43:57 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 2 01:43:57 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 2 01:43:57 localhost kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Dec 2 01:43:57 localhost kernel: Device empty
Dec 2 01:43:57 localhost kernel: Movable zone start for each node
Dec 2 01:43:57 localhost kernel: Early memory node ranges
Dec 2 01:43:57 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 2 01:43:57 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 2 01:43:57 localhost kernel: node 0: [mem 0x0000000100000000-0x000000043fffffff]
Dec 2 01:43:57 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Dec 2 01:43:57 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 2 01:43:57 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 2 01:43:57 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 2 01:43:57 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 2 01:43:57 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 2 01:43:57 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 2 01:43:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 2 01:43:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 2 01:43:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 2 01:43:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 2 01:43:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 2 01:43:57 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 2 01:43:57 localhost kernel: TSC deadline timer available
Dec 2 01:43:57 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Dec 2 01:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 2 01:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 2 01:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 2 01:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 2 01:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 2 01:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 2 01:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 2 01:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 2 01:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 2 01:43:57 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 2 01:43:57 localhost kernel: Booting paravirtualized kernel on KVM
Dec 2 01:43:57 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 2 01:43:57 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 2 01:43:57 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Dec 2 01:43:57 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 2 01:43:57 localhost kernel: Fallback order for Node 0: 0
Dec 2 01:43:57 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4128475
Dec 2 01:43:57 localhost kernel: Policy zone: Normal
Dec 2 01:43:57 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 2 01:43:57 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Dec 2 01:43:57 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 2 01:43:57 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 2 01:43:57 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 2 01:43:57 localhost kernel: software IO TLB: area num 8.
Dec 2 01:43:57 localhost kernel: Memory: 2873456K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved)
Dec 2 01:43:57 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Dec 2 01:43:57 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 2 01:43:57 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Dec 2 01:43:57 localhost kernel: ftrace: allocated 176 pages with 3 groups
Dec 2 01:43:57 localhost kernel: Dynamic Preempt: voluntary
Dec 2 01:43:57 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 2 01:43:57 localhost kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 2 01:43:57 localhost kernel: 	Trampoline variant of Tasks RCU enabled.
Dec 2 01:43:57 localhost kernel: 	Rude variant of Tasks RCU enabled.
Dec 2 01:43:57 localhost kernel: 	Tracing variant of Tasks RCU enabled.
Dec 2 01:43:57 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 2 01:43:57 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 2 01:43:57 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 2 01:43:57 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 2 01:43:57 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 2 01:43:57 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Dec 2 01:43:57 localhost kernel: Console: colour VGA+ 80x25
Dec 2 01:43:57 localhost kernel: printk: console [tty0] enabled
Dec 2 01:43:57 localhost kernel: printk: console [ttyS0] enabled
Dec 2 01:43:57 localhost kernel: ACPI: Core revision 20211217
Dec 2 01:43:57 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 2 01:43:57 localhost kernel: x2apic enabled
Dec 2 01:43:57 localhost kernel: Switched APIC routing to physical x2apic.
Dec 2 01:43:57 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 2 01:43:57 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 2 01:43:57 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 2 01:43:57 localhost kernel: LSM: Security Framework initializing
Dec 2 01:43:57 localhost kernel: Yama: becoming mindful.
Dec 2 01:43:57 localhost kernel: SELinux: Initializing.
Dec 2 01:43:57 localhost kernel: LSM support for eBPF active
Dec 2 01:43:57 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 2 01:43:57 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 2 01:43:57 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 2 01:43:57 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 2 01:43:57 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 2 01:43:57 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 2 01:43:57 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 2 01:43:57 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 2 01:43:57 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 2 01:43:57 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 2 01:43:57 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 2 01:43:57 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 2 01:43:57 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 2 01:43:57 localhost kernel: Freeing SMP alternatives memory: 36K
Dec 2 01:43:57 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 2 01:43:57 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Dec 2 01:43:57 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 2 01:43:57 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 2 01:43:57 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 2 01:43:57 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 2 01:43:57 localhost kernel: ... version: 0
Dec 2 01:43:57 localhost kernel: ... bit width: 48
Dec 2 01:43:57 localhost kernel: ... generic registers: 6
Dec 2 01:43:57 localhost kernel: ... value mask: 0000ffffffffffff
Dec 2 01:43:57 localhost kernel: ... max period: 00007fffffffffff
Dec 2 01:43:57 localhost kernel: ... fixed-purpose events: 0
Dec 2 01:43:57 localhost kernel: ... event mask: 000000000000003f
Dec 2 01:43:57 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 2 01:43:57 localhost kernel: rcu: 	Max phase no-delay instances is 400.
Dec 2 01:43:57 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 2 01:43:57 localhost kernel: x86: Booting SMP configuration:
Dec 2 01:43:57 localhost kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7
Dec 2 01:43:57 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 2 01:43:57 localhost kernel: smpboot: Max logical packages: 8
Dec 2 01:43:57 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 2 01:43:57 localhost kernel: node 0 deferred pages initialised in 24ms
Dec 2 01:43:57 localhost kernel: devtmpfs: initialized
Dec 2 01:43:57 localhost kernel: x86/mm: Memory block size: 128MB
Dec 2 01:43:57 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 2 01:43:57 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Dec 2 01:43:57 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 2 01:43:57 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 2 01:43:57 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Dec 2 01:43:57 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 2 01:43:57 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 2 01:43:57 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 2 01:43:57 localhost kernel: audit: type=2000 audit(1764657836.538:1): state=initialized audit_enabled=0 res=1
Dec 2 01:43:57 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 2 01:43:57 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 2 01:43:57 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 2 01:43:57 localhost kernel: cpuidle: using governor menu
Dec 2 01:43:57 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Dec 2 01:43:57 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 2 01:43:57 localhost kernel: PCI: Using configuration type 1 for base access
Dec 2 01:43:57 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 2 01:43:57 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 2 01:43:57 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Dec 2 01:43:57 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 2 01:43:57 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 2 01:43:57 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 2 01:43:57 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 2 01:43:57 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 2 01:43:57 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 2 01:43:57 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 2 01:43:57 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 2 01:43:57 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 2 01:43:57 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 2 01:43:57 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 2 01:43:57 localhost kernel: ACPI: Interpreter enabled
Dec 2 01:43:57 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 2 01:43:57 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 2 01:43:57 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 2 01:43:57 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 2 01:43:57 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 2 01:43:57 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 2 01:43:57 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [3] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [4] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [5] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [6] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [7] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [8] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [9] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [10] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [11] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [12] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [13] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [14] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [15] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [16] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [17] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [18] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [19] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [20] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [21] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [22] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [23] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [24] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [25] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [26] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [27] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [28] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [29] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [30] registered
Dec 2 01:43:57 localhost kernel: acpiphp: Slot [31] registered
Dec 2 01:43:57 localhost kernel: PCI host bridge to bus 0000:00
Dec 2 01:43:57 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 2 01:43:57 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 2 01:43:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 2 01:43:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 2 01:43:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Dec 2 01:43:57 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 2 01:43:57 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 2 01:43:57 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 2 01:43:57 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 2 01:43:57 localhost kernel: pci 0000:00:01.1: reg 0x20: [io 0xc140-0xc14f]
Dec 2 01:43:57 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 2 01:43:57 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 2 01:43:57 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 2 01:43:57 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 2 01:43:57 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Dec 2 01:43:57 localhost kernel: pci 0000:00:01.2: reg 0x20: [io 0xc100-0xc11f]
Dec 2 01:43:57 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 2 01:43:57 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 2 01:43:57 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 2 01:43:57 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 2 01:43:57 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 2 01:43:57 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 2 01:43:57 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 2 01:43:57 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 2 01:43:57 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 2 01:43:57 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 2 01:43:57 localhost kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Dec 2 01:43:57 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 2 01:43:57 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 2 01:43:57 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 2 01:43:57 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 2 01:43:57 localhost kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 2 01:43:57 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 2 01:43:57 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 2 01:43:57 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 2 01:43:57 localhost kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Dec 2 01:43:57 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 2 01:43:57 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 2 01:43:57 localhost kernel: pci 0000:00:06.0: reg 0x10: [io 0xc120-0xc13f]
Dec 2 01:43:57 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 2 01:43:57 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 2 01:43:57 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 2 01:43:57 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 2 01:43:57 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 2 01:43:57 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 2 01:43:57 localhost kernel: iommu: Default domain type: Translated
Dec 2 01:43:57 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 2 01:43:57 localhost kernel: SCSI subsystem initialized
Dec 2 01:43:57 localhost kernel: ACPI: bus type USB registered
Dec 2 01:43:57 localhost kernel: usbcore: registered new interface driver usbfs
Dec 2 01:43:57 localhost kernel: usbcore: registered new interface driver hub
Dec 2 01:43:57 localhost kernel: usbcore: registered new device driver usb
Dec 2 01:43:57 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 2 01:43:57 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 2 01:43:57 localhost kernel: PTP clock support registered
Dec 2 01:43:57 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 2 01:43:57 localhost kernel: NetLabel: Initializing
Dec 2 01:43:57 localhost kernel: NetLabel: domain hash size = 128
Dec 2 01:43:57 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Dec 2 01:43:57 localhost kernel: NetLabel: unlabeled traffic allowed by default
Dec 2 01:43:57 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 2 01:43:57 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 2 01:43:57 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 2 01:43:57 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 2 01:43:57 localhost kernel: vgaarb: loaded
Dec 2 01:43:57 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 2 01:43:57 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 2 01:43:57 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 2 01:43:57 localhost kernel: pnp: PnP ACPI init
Dec 2 01:43:57 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 2 01:43:57 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 2 01:43:57 localhost kernel: NET: Registered PF_INET protocol family
Dec 2 01:43:57 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 2 01:43:57 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Dec 2 01:43:57 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 2 01:43:57 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 2 01:43:57 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 2 01:43:57 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Dec 2 01:43:57 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Dec 2 01:43:57 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 2 01:43:57 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 2 01:43:57 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 2 01:43:57 localhost kernel: NET: Registered PF_XDP protocol family
Dec 2 01:43:57 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 2 01:43:57 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 2 01:43:57 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 2 01:43:57 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 2 01:43:57 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Dec 2 01:43:57 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 2 01:43:57 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 2 01:43:57 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 2 01:43:57 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 29073 usecs
Dec 2 01:43:57 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 2 01:43:57 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 2 01:43:57 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 2 01:43:57 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 2 01:43:57 localhost kernel: ACPI: bus type thunderbolt registered
Dec 2 01:43:57 localhost kernel: Initialise system trusted keyrings
Dec 2 01:43:57 localhost kernel: Key type blacklist registered
Dec 2 01:43:57 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Dec 2 01:43:57 localhost kernel: zbud: loaded
Dec 2 01:43:57 localhost kernel: integrity: Platform Keyring initialized
Dec 2 01:43:57 localhost kernel: NET: Registered PF_ALG protocol family
Dec 2 01:43:57 localhost kernel: xor: automatically using best checksumming function avx
Dec 2 01:43:57 localhost kernel: Key type asymmetric registered
Dec 2 01:43:57 localhost kernel: Asymmetric key parser 'x509' registered
Dec 2 01:43:57 localhost kernel: Running certificate verification selftests
Dec 2 01:43:57 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 2 01:43:57 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 2 01:43:57 localhost kernel: io scheduler mq-deadline registered
Dec 2 01:43:57 localhost kernel: io scheduler kyber registered
Dec 2 01:43:57 localhost kernel: io scheduler bfq registered
Dec 2 01:43:57 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 2 01:43:57 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 2 01:43:57 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 2 01:43:57 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 2 01:43:57 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 2 01:43:57 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 2 01:43:57 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 2 01:43:57 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 2 01:43:57 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 2 01:43:57 localhost kernel: Non-volatile memory driver v1.3
Dec 2 01:43:57 localhost kernel: rdac: device handler registered
Dec 2 01:43:57 localhost kernel: hp_sw: device handler registered
Dec 2 01:43:57 localhost kernel: emc: device handler registered
Dec 2 01:43:57 localhost kernel: alua: device handler registered
Dec 2 01:43:57 localhost kernel: libphy: Fixed MDIO Bus: probed
Dec 2 01:43:57 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Dec 2 01:43:57 localhost kernel: ehci-pci: EHCI PCI platform driver
Dec 2 01:43:57 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Dec 2 01:43:57 localhost kernel: ohci-pci: OHCI PCI platform driver
Dec 2 01:43:57 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Dec 2 01:43:57 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 2 01:43:57 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 2 01:43:57 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 2 01:43:57 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 2 01:43:57 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 2 01:43:57 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 2 01:43:57 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 2 01:43:57 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Dec 2 01:43:57 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 2 01:43:57 localhost kernel: hub 1-0:1.0: USB hub found
Dec 2 01:43:57 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 2 01:43:57 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 2 01:43:57 localhost kernel: usbserial: USB Serial support registered for generic
Dec 2 01:43:57 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 2 01:43:57 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 2 01:43:57 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 2 01:43:57 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 2 01:43:57 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 2 01:43:57 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 2 01:43:57 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 2 01:43:57 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-02T06:43:56 UTC (1764657836)
Dec 2 01:43:57 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 2 01:43:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 2 01:43:57 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 2 01:43:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 2 01:43:57 localhost kernel: usbcore: registered new interface driver usbhid
Dec 2 01:43:57 localhost kernel: usbhid: USB HID core driver
Dec 2 01:43:57 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 2 01:43:57 localhost kernel: Initializing XFRM netlink socket
Dec 2 01:43:57 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 2 01:43:57 localhost kernel: Segment Routing with IPv6
Dec 2 01:43:57 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 2 01:43:57 localhost kernel: mpls_gso: MPLS GSO support
Dec 2 01:43:57 localhost kernel: IPI shorthand broadcast: enabled
Dec 2 01:43:57 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 2 01:43:57 localhost kernel: AES CTR mode by8 optimization enabled
Dec 2 01:43:57 localhost kernel: sched_clock: Marking stable (775007730, 176358553)->(1092778420, -141412137)
Dec 2 01:43:57 localhost kernel: registered taskstats version 1
Dec 2 01:43:57 localhost kernel: Loading compiled-in X.509 certificates
Dec 2 01:43:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Dec 2 01:43:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 2 01:43:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 2 01:43:57 localhost kernel: zswap: loaded using pool lzo/zbud
Dec 2 01:43:57 localhost kernel: page_owner is disabled
Dec 2 01:43:57 localhost kernel: Key type big_key registered
Dec 2 01:43:57 localhost kernel: Freeing initrd memory: 74232K
Dec 2 01:43:57 localhost kernel: Key type encrypted registered
Dec 2 01:43:57 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 2 01:43:57 localhost kernel: Loading compiled-in module X.509 certificates
Dec 2 01:43:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Dec 2 01:43:57 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 2 01:43:57 localhost kernel: ima: No architecture policies found
Dec 2 01:43:57 localhost kernel: evm: Initialising EVM extended attributes:
Dec 2 01:43:57 localhost kernel: evm: security.selinux
Dec 2 01:43:57 localhost kernel: evm: security.SMACK64 (disabled)
Dec 2 01:43:57 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 2 01:43:57 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 2 01:43:57 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 2 01:43:57 localhost kernel: evm: security.apparmor (disabled)
Dec 2 01:43:57 localhost kernel: evm: security.ima
Dec 2 01:43:57 localhost kernel: evm: security.capability
Dec 2 01:43:57 localhost kernel: evm: HMAC attrs: 0x1
Dec 2 01:43:57 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 2 01:43:57 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 2 01:43:57 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 2 01:43:57 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 2 01:43:57 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 2 01:43:57 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 2 01:43:57 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 2 01:43:57 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 2 01:43:57 localhost kernel: Freeing unused decrypted memory: 2036K
Dec 2 01:43:57 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Dec 2 01:43:57 localhost kernel: Write protecting the kernel read-only data: 26624k
Dec 2 01:43:57 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 2 01:43:57 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Dec 2 01:43:57 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 2 01:43:57 localhost kernel: Run /init as init process
Dec 2 01:43:57 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 2 01:43:57 localhost systemd[1]: Detected virtualization kvm.
Dec 2 01:43:57 localhost systemd[1]: Detected architecture x86-64.
Dec 2 01:43:57 localhost systemd[1]: Running in initrd.
Dec 2 01:43:57 localhost systemd[1]: No hostname configured, using default hostname.
Dec 2 01:43:57 localhost systemd[1]: Hostname set to .
Dec 2 01:43:57 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 2 01:43:57 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 2 01:43:57 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 2 01:43:57 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 2 01:43:57 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 2 01:43:57 localhost systemd[1]: Reached target Local File Systems.
Dec 2 01:43:57 localhost systemd[1]: Reached target Path Units.
Dec 2 01:43:57 localhost systemd[1]: Reached target Slice Units.
Dec 2 01:43:57 localhost systemd[1]: Reached target Swaps.
Dec 2 01:43:57 localhost systemd[1]: Reached target Timer Units.
Dec 2 01:43:57 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 2 01:43:57 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 2 01:43:57 localhost systemd[1]: Listening on Journal Socket.
Dec 2 01:43:57 localhost systemd[1]: Listening on udev Control Socket.
Dec 2 01:43:57 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 2 01:43:57 localhost systemd[1]: Reached target Socket Units.
Dec 2 01:43:57 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 2 01:43:57 localhost systemd[1]: Starting Journal Service...
Dec 2 01:43:57 localhost systemd[1]: Starting Load Kernel Modules...
Dec 2 01:43:57 localhost systemd[1]: Starting Create System Users...
Dec 2 01:43:57 localhost systemd[1]: Starting Setup Virtual Console...
Dec 2 01:43:57 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 2 01:43:57 localhost systemd[1]: Finished Load Kernel Modules.
Dec 2 01:43:57 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 2 01:43:57 localhost systemd-journald[282]: Journal started
Dec 2 01:43:57 localhost systemd-journald[282]: Runtime Journal (/run/log/journal/64aa52087bf7490c857b3c1a3cae8bb3) is 8.0M, max 314.7M, 306.7M free.
Dec 2 01:43:57 localhost systemd-modules-load[283]: Module 'msr' is built in
Dec 2 01:43:57 localhost systemd[1]: Started Journal Service.
Dec 2 01:43:57 localhost systemd[1]: Finished Setup Virtual Console.
Dec 2 01:43:57 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 2 01:43:57 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 2 01:43:57 localhost systemd[1]: Starting dracut cmdline hook...
Dec 2 01:43:57 localhost systemd-sysusers[284]: Creating group 'sgx' with GID 997.
Dec 2 01:43:57 localhost systemd-sysusers[284]: Creating group 'users' with GID 100.
Dec 2 01:43:57 localhost systemd-sysusers[284]: Creating group 'dbus' with GID 81.
Dec 2 01:43:57 localhost systemd-sysusers[284]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 2 01:43:57 localhost systemd[1]: Finished Create System Users.
Dec 2 01:43:57 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 2 01:43:57 localhost dracut-cmdline[289]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Dec 2 01:43:57 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 2 01:43:57 localhost dracut-cmdline[289]: Using kernel command line parameters: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 2 01:43:57 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 2 01:43:57 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 2 01:43:57 localhost systemd[1]: Finished dracut cmdline hook.
Dec 2 01:43:57 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 2 01:43:57 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 2 01:43:57 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 2 01:43:57 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Dec 2 01:43:57 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 2 01:43:57 localhost kernel: RPC: Registered udp transport module.
Dec 2 01:43:57 localhost kernel: RPC: Registered tcp transport module.
Dec 2 01:43:57 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 2 01:43:57 localhost rpc.statd[404]: Version 2.5.4 starting
Dec 2 01:43:57 localhost rpc.statd[404]: Initializing NSM state
Dec 2 01:43:57 localhost rpc.idmapd[409]: Setting log level to 0
Dec 2 01:43:57 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 2 01:43:57 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 2 01:43:57 localhost systemd-udevd[422]: Using default interface naming scheme 'rhel-9.0'.
Dec 2 01:43:57 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 2 01:43:57 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 2 01:43:57 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 2 01:43:57 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 2 01:43:57 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 2 01:43:57 localhost systemd[1]: Reached target System Initialization.
Dec 2 01:43:57 localhost systemd[1]: Reached target Basic System.
Dec 2 01:43:57 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 2 01:43:57 localhost systemd[1]: Reached target Network.
Dec 2 01:43:57 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 2 01:43:57 localhost systemd[1]: Starting dracut initqueue hook...
Dec 2 01:43:57 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Dec 2 01:43:57 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 2 01:43:57 localhost kernel: GPT:20971519 != 838860799
Dec 2 01:43:57 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 2 01:43:57 localhost kernel: scsi host0: ata_piix
Dec 2 01:43:57 localhost kernel: GPT:20971519 != 838860799
Dec 2 01:43:57 localhost systemd-udevd[440]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 01:43:57 localhost kernel: scsi host1: ata_piix
Dec 2 01:43:57 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 2 01:43:57 localhost kernel: vda: vda1 vda2 vda3 vda4
Dec 2 01:43:57 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Dec 2 01:43:57 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Dec 2 01:43:57 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Dec 2 01:43:57 localhost systemd[1]: Reached target Initrd Root Device.
Dec 2 01:43:58 localhost kernel: ata1: found unknown device (class 0)
Dec 2 01:43:58 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 2 01:43:58 localhost kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 2 01:43:58 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 2 01:43:58 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 2 01:43:58 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 2 01:43:58 localhost systemd[1]: Finished dracut initqueue hook.
Dec 2 01:43:58 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 2 01:43:58 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 2 01:43:58 localhost systemd[1]: Reached target Remote File Systems.
Dec 2 01:43:58 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 2 01:43:58 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 2 01:43:58 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Dec 2 01:43:58 localhost systemd-fsck[513]: /usr/sbin/fsck.xfs: XFS file system.
Dec 2 01:43:58 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Dec 2 01:43:58 localhost systemd[1]: Mounting /sysroot...
Dec 2 01:43:58 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 2 01:43:58 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Dec 2 01:43:58 localhost kernel: XFS (vda4): Ending clean mount
Dec 2 01:43:58 localhost systemd[1]: Mounted /sysroot.
Dec 2 01:43:58 localhost systemd[1]: Reached target Initrd Root File System.
Dec 2 01:43:58 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 2 01:43:58 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 2 01:43:58 localhost systemd[1]: Reached target Initrd File Systems.
Dec 2 01:43:58 localhost systemd[1]: Reached target Initrd Default Target.
Dec 2 01:43:58 localhost systemd[1]: Starting dracut mount hook...
Dec 2 01:43:58 localhost systemd[1]: Finished dracut mount hook.
Dec 2 01:43:58 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 2 01:43:58 localhost rpc.idmapd[409]: exiting on signal 15
Dec 2 01:43:58 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 2 01:43:58 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 2 01:43:58 localhost systemd[1]: Stopped target Network.
Dec 2 01:43:58 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 2 01:43:58 localhost systemd[1]: Stopped target Timer Units.
Dec 2 01:43:58 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 2 01:43:58 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 2 01:43:58 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 2 01:43:58 localhost systemd[1]: Stopped target Basic System.
Dec 2 01:43:58 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 2 01:43:58 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 2 01:43:58 localhost systemd[1]: Stopped target Path Units.
Dec 2 01:43:58 localhost systemd[1]: Stopped target Remote File Systems.
Dec 2 01:43:58 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 2 01:43:58 localhost systemd[1]: Stopped target Slice Units.
Dec 2 01:43:58 localhost systemd[1]: Stopped target Socket Units.
Dec 2 01:43:58 localhost systemd[1]: Stopped target System Initialization.
Dec 2 01:43:58 localhost systemd[1]: Stopped target Local File Systems.
Dec 2 01:43:58 localhost systemd[1]: Stopped target Swaps.
Dec 2 01:43:58 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped dracut mount hook.
Dec 2 01:43:58 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 2 01:43:58 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 2 01:43:58 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 2 01:43:58 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 2 01:43:58 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 2 01:43:58 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 2 01:43:58 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 2 01:43:58 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 2 01:43:58 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 2 01:43:58 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 2 01:43:58 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 2 01:43:58 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 2 01:43:58 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 2 01:43:58 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Closed udev Control Socket.
Dec 2 01:43:58 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Closed udev Kernel Socket.
Dec 2 01:43:58 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 2 01:43:58 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 2 01:43:58 localhost systemd[1]: Starting Cleanup udev Database...
Dec 2 01:43:58 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 2 01:43:58 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 2 01:43:58 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Stopped Create System Users.
Dec 2 01:43:58 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 2 01:43:58 localhost systemd[1]: Finished Cleanup udev Database.
Dec 2 01:43:58 localhost systemd[1]: Reached target Switch Root.
Dec 2 01:43:58 localhost systemd[1]: Starting Switch Root...
Dec 2 01:43:58 localhost systemd[1]: Switching root.
Dec 2 01:43:58 localhost systemd-journald[282]: Journal stopped
Dec 2 01:43:59 localhost systemd-journald[282]: Received SIGTERM from PID 1 (systemd).
Dec 2 01:43:59 localhost kernel: audit: type=1404 audit(1764657838.973:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 2 01:43:59 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 01:43:59 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 01:43:59 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 01:43:59 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 01:43:59 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 01:43:59 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 01:43:59 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 01:43:59 localhost kernel: audit: type=1403 audit(1764657839.066:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 2 01:43:59 localhost systemd[1]: Successfully loaded SELinux policy in 95.936ms.
Dec 2 01:43:59 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.990ms.
Dec 2 01:43:59 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 2 01:43:59 localhost systemd[1]: Detected virtualization kvm.
Dec 2 01:43:59 localhost systemd[1]: Detected architecture x86-64.
Dec 2 01:43:59 localhost systemd-rc-local-generator[583]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 01:43:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 01:43:59 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 2 01:43:59 localhost systemd[1]: Stopped Switch Root.
Dec 2 01:43:59 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 2 01:43:59 localhost systemd[1]: Created slice Slice /system/getty.
Dec 2 01:43:59 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 2 01:43:59 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 2 01:43:59 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 2 01:43:59 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Dec 2 01:43:59 localhost systemd[1]: Created slice User and Session Slice.
Dec 2 01:43:59 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 2 01:43:59 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 2 01:43:59 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 2 01:43:59 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 2 01:43:59 localhost systemd[1]: Stopped target Switch Root.
Dec 2 01:43:59 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 2 01:43:59 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 2 01:43:59 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 2 01:43:59 localhost systemd[1]: Reached target Path Units.
Dec 2 01:43:59 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 2 01:43:59 localhost systemd[1]: Reached target Slice Units.
Dec 2 01:43:59 localhost systemd[1]: Reached target Swaps.
Dec 2 01:43:59 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 2 01:43:59 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 2 01:43:59 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 2 01:43:59 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 2 01:43:59 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 2 01:43:59 localhost systemd[1]: Listening on udev Control Socket.
Dec 2 01:43:59 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 2 01:43:59 localhost systemd[1]: Mounting Huge Pages File System...
Dec 2 01:43:59 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 2 01:43:59 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 2 01:43:59 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 2 01:43:59 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 2 01:43:59 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 2 01:43:59 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 2 01:43:59 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 2 01:43:59 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 2 01:43:59 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 2 01:43:59 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 2 01:43:59 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 2 01:43:59 localhost systemd[1]: Stopped Journal Service.
Dec 2 01:43:59 localhost systemd[1]: Starting Journal Service...
Dec 2 01:43:59 localhost systemd[1]: Starting Load Kernel Modules...
Dec 2 01:43:59 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 2 01:43:59 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 2 01:43:59 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 2 01:43:59 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 2 01:43:59 localhost kernel: fuse: init (API version 7.36)
Dec 2 01:43:59 localhost systemd-journald[619]: Journal started
Dec 2 01:43:59 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/510530184876bdc0ebb29e7199f63471) is 8.0M, max 314.7M, 306.7M free.
Dec 2 01:43:59 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 2 01:43:59 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 2 01:43:59 localhost systemd-modules-load[620]: Module 'msr' is built in
Dec 2 01:43:59 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 2 01:43:59 localhost systemd[1]: Started Journal Service.
Dec 2 01:43:59 localhost systemd[1]: Mounted Huge Pages File System.
Dec 2 01:43:59 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 2 01:43:59 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 2 01:43:59 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 2 01:43:59 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 2 01:43:59 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 2 01:43:59 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 2 01:43:59 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 2 01:43:59 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 2 01:43:59 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 2 01:43:59 localhost systemd[1]: Finished Load Kernel Modules.
Dec 2 01:43:59 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 2 01:43:59 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 2 01:43:59 localhost systemd[1]: Mounting FUSE Control File System...
Dec 2 01:43:59 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 2 01:43:59 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 2 01:43:59 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 2 01:43:59 localhost kernel: ACPI: bus type drm_connector registered
Dec 2 01:43:59 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 2 01:43:59 localhost systemd[1]: Starting Load/Save Random Seed...
Dec 2 01:43:59 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 2 01:43:59 localhost systemd[1]: Starting Create System Users...
Dec 2 01:43:59 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/510530184876bdc0ebb29e7199f63471) is 8.0M, max 314.7M, 306.7M free.
Dec 2 01:43:59 localhost systemd-journald[619]: Received client request to flush runtime journal.
Dec 2 01:43:59 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 2 01:43:59 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 2 01:43:59 localhost systemd[1]: Mounted FUSE Control File System.
Dec 2 01:43:59 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 2 01:43:59 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 2 01:43:59 localhost systemd[1]: Finished Load/Save Random Seed.
Dec 2 01:43:59 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 2 01:43:59 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 2 01:43:59 localhost systemd-sysusers[631]: Creating group 'sgx' with GID 989.
Dec 2 01:43:59 localhost systemd-sysusers[631]: Creating group 'systemd-oom' with GID 988.
Dec 2 01:43:59 localhost systemd-sysusers[631]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Dec 2 01:43:59 localhost systemd[1]: Finished Create System Users.
Dec 2 01:43:59 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 2 01:43:59 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 2 01:43:59 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 2 01:43:59 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 2 01:43:59 localhost systemd[1]: Set up automount EFI System Partition Automount.
Dec 2 01:44:00 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 2 01:44:00 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 2 01:44:00 localhost systemd-udevd[636]: Using default interface naming scheme 'rhel-9.0'.
Dec 2 01:44:00 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 2 01:44:00 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 2 01:44:00 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 2 01:44:00 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 2 01:44:00 localhost systemd-udevd[639]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 01:44:00 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Dec 2 01:44:00 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Dec 2 01:44:00 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Dec 2 01:44:00 localhost systemd-fsck[678]: fsck.fat 4.2 (2021-01-31)
Dec 2 01:44:00 localhost systemd-fsck[678]: /dev/vda2: 12 files, 1782/51145 clusters
Dec 2 01:44:00 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Dec 2 01:44:00 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 2 01:44:00 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 2 01:44:00 localhost kernel: SVM: TSC scaling supported
Dec 2 01:44:00 localhost kernel: kvm: Nested Virtualization enabled
Dec 2 01:44:00 localhost kernel: SVM: kvm: Nested Paging enabled
Dec 2 01:44:00 localhost kernel: SVM: LBR virtualization supported
Dec 2 01:44:00 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 2 01:44:00 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 2 01:44:00 localhost kernel: Console: switching to colour dummy device 80x25
Dec 2 01:44:00 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 2 01:44:00 localhost kernel: [drm] features: -context_init
Dec 2 01:44:00 localhost kernel: [drm] number of scanouts: 1
Dec 2 01:44:00 localhost kernel: [drm] number of cap sets: 0
Dec 2 01:44:00 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Dec 2 01:44:00 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Dec 2 01:44:00 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 2 01:44:00 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 2 01:44:00 localhost systemd[1]: Mounting /boot...
Dec 2 01:44:00 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Dec 2 01:44:00 localhost kernel: XFS (vda3): Ending clean mount
Dec 2 01:44:00 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Dec 2 01:44:00 localhost systemd[1]: Mounted /boot.
Dec 2 01:44:00 localhost systemd[1]: Mounting /boot/efi...
Dec 2 01:44:00 localhost systemd[1]: Mounted /boot/efi.
Dec 2 01:44:00 localhost systemd[1]: Reached target Local File Systems.
Dec 2 01:44:00 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 2 01:44:00 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 2 01:44:00 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 2 01:44:00 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 2 01:44:00 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 2 01:44:00 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 2 01:44:00 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 2 01:44:00 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 715 (bootctl)
Dec 2 01:44:00 localhost systemd[1]: Starting File System Check on /dev/vda2...
Dec 2 01:44:00 localhost systemd[1]: Finished File System Check on /dev/vda2.
Dec 2 01:44:00 localhost systemd[1]: Mounting EFI System Partition Automount...
Dec 2 01:44:00 localhost systemd[1]: Mounted EFI System Partition Automount.
Dec 2 01:44:00 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 2 01:44:00 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 2 01:44:00 localhost systemd[1]: Starting Security Auditing Service...
Dec 2 01:44:00 localhost systemd[1]: Starting RPC Bind...
Dec 2 01:44:00 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 2 01:44:00 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 2 01:44:00 localhost auditd[726]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Dec 2 01:44:00 localhost auditd[726]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Dec 2 01:44:00 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 2 01:44:00 localhost systemd[1]: Started RPC Bind.
Dec 2 01:44:00 localhost systemd[1]: Starting Update is Completed...
Dec 2 01:44:00 localhost systemd[1]: Finished Update is Completed.
Dec 2 01:44:00 localhost augenrules[731]: /sbin/augenrules: No change
Dec 2 01:44:00 localhost augenrules[742]: No rules
Dec 2 01:44:00 localhost augenrules[742]: enabled 1
Dec 2 01:44:00 localhost augenrules[742]: failure 1
Dec 2 01:44:00 localhost augenrules[742]: pid 726
Dec 2 01:44:00 localhost augenrules[742]: rate_limit 0
Dec 2 01:44:00 localhost augenrules[742]: backlog_limit 8192
Dec 2 01:44:00 localhost augenrules[742]: lost 0
Dec 2 01:44:00 localhost augenrules[742]: backlog 3
Dec 2 01:44:00 localhost augenrules[742]: backlog_wait_time 60000
Dec 2 01:44:00 localhost augenrules[742]: backlog_wait_time_actual 0
Dec 2 01:44:00 localhost augenrules[742]: enabled 1
Dec 2 01:44:00 localhost augenrules[742]: failure 1
Dec 2 01:44:00 localhost augenrules[742]: pid 726
Dec 2 01:44:00 localhost augenrules[742]: rate_limit 0
Dec 2 01:44:00 localhost augenrules[742]: backlog_limit 8192
Dec 2 01:44:00 localhost augenrules[742]: lost 0
Dec 2 01:44:00 localhost augenrules[742]: backlog 2
Dec 2 01:44:00 localhost augenrules[742]: backlog_wait_time 60000
Dec 2 01:44:00 localhost augenrules[742]: backlog_wait_time_actual 0
Dec 2 01:44:00 localhost augenrules[742]: enabled 1
Dec 2 01:44:00 localhost augenrules[742]: failure 1
Dec 2 01:44:00 localhost augenrules[742]: pid 726
Dec 2 01:44:00 localhost augenrules[742]: rate_limit 0
Dec 2 01:44:00 localhost augenrules[742]: backlog_limit 8192
Dec 2 01:44:00 localhost augenrules[742]: lost 0
Dec 2 01:44:00 localhost augenrules[742]: backlog 0
Dec 2 01:44:00 localhost augenrules[742]: backlog_wait_time 60000
Dec 2 01:44:00 localhost augenrules[742]: backlog_wait_time_actual 0
Dec 2 01:44:00 localhost systemd[1]: Started Security Auditing Service.
Dec 2 01:44:00 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 2 01:44:00 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 2 01:44:00 localhost systemd[1]: Reached target System Initialization.
Dec 2 01:44:00 localhost systemd[1]: Started dnf makecache --timer.
Dec 2 01:44:00 localhost systemd[1]: Started Daily rotation of log files.
Dec 2 01:44:00 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 2 01:44:00 localhost systemd[1]: Reached target Timer Units.
Dec 2 01:44:00 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 2 01:44:00 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 2 01:44:00 localhost systemd[1]: Reached target Socket Units.
Dec 2 01:44:00 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Dec 2 01:44:00 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 2 01:44:00 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 2 01:44:00 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 2 01:44:00 localhost systemd[1]: Reached target Basic System.
Dec 2 01:44:00 localhost systemd[1]: Starting NTP client/server...
Dec 2 01:44:00 localhost journal[751]: Ready
Dec 2 01:44:00 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 2 01:44:00 localhost systemd[1]: Started irqbalance daemon.
Dec 2 01:44:00 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 2 01:44:00 localhost systemd[1]: Starting System Logging Service...
Dec 2 01:44:00 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 01:44:00 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 01:44:00 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 01:44:00 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 2 01:44:00 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 2 01:44:00 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 2 01:44:00 localhost systemd[1]: Starting User Login Management...
Dec 2 01:44:00 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 2 01:44:00 localhost systemd[1]: Started System Logging Service.
Dec 2 01:44:00 localhost rsyslogd[759]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="759" x-info="https://www.rsyslog.com"] start
Dec 2 01:44:00 localhost rsyslogd[759]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Dec 2 01:44:00 localhost chronyd[766]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 2 01:44:00 localhost chronyd[766]: Using right/UTC timezone to obtain leap second data
Dec 2 01:44:00 localhost chronyd[766]: Loaded seccomp filter (level 2)
Dec 2 01:44:00 localhost systemd-logind[760]: New seat seat0.
Dec 2 01:44:00 localhost systemd[1]: Started NTP client/server.
Dec 2 01:44:00 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 2 01:44:00 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 2 01:44:00 localhost systemd[1]: Started User Login Management.
Dec 2 01:44:00 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 2 01:44:01 localhost cloud-init[770]: Cloud-init v. 22.1-9.el9 running 'init-local' at Tue, 02 Dec 2025 06:44:01 +0000. Up 5.44 seconds.
Dec 2 01:44:01 localhost systemd[1]: Starting Hostname Service...
Dec 2 01:44:01 localhost systemd[1]: Started Hostname Service.
Dec 2 01:44:01 localhost systemd-hostnamed[785]: Hostname set to (static)
Dec 2 01:44:01 localhost systemd[1]: Finished Initial cloud-init job (pre-networking).
Dec 2 01:44:01 localhost systemd[1]: Reached target Preparation for Network.
Dec 2 01:44:01 localhost systemd[1]: Starting Network Manager...
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.6800] NetworkManager (version 1.42.2-1.el9) is starting... (boot:9c01ca59-fcb0-40d3-99c4-2690b78cc18a)
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.6805] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.6825] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 2 01:44:01 localhost systemd[1]: Started Network Manager.
Dec 2 01:44:01 localhost systemd[1]: Reached target Network.
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7085] manager[0x5625da2b3020]: monitoring kernel firmware directory '/lib/firmware'.
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7111] hostname: hostname: using hostnamed
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7111] hostname: static hostname changed from (none) to "np0005541914.novalocal"
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7115] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 2 01:44:01 localhost systemd[1]: Starting Network Manager Wait Online...
Dec 2 01:44:01 localhost systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 2 01:44:01 localhost systemd[1]: Starting Enable periodic update of entitlement certificates....
Dec 2 01:44:01 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7301] manager[0x5625da2b3020]: rfkill: Wi-Fi hardware radio set enabled
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7303] manager[0x5625da2b3020]: rfkill: WWAN hardware radio set enabled
Dec 2 01:44:01 localhost systemd[1]: Started GSSAPI Proxy Daemon.
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7356] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7357] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7360] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7362] manager: Networking is enabled by state file
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7378] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7379] settings: Loaded settings plugin: keyfile (internal)
Dec 2 01:44:01 localhost systemd[1]: Started Enable periodic update of entitlement certificates..
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7408] dhcp: init: Using DHCP client 'internal'
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7413] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7428] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7434] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7444] device (lo): Activation: starting connection 'lo' (7d726e81-3c90-4757-9293-46e316e2c15c)
Dec 2 01:44:01 localhost systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7454] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7458] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7494] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7499] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7501] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7503] device (eth0): carrier: link connected
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7506] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7511] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Dec 2 01:44:01 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 2 01:44:01 localhost systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 2 01:44:01 localhost systemd[1]: Reached target NFS client services.
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7551] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7556] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7557] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7560] manager: NetworkManager state is now CONNECTING
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7561] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7571] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Dec 2 01:44:01 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7576] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 2 01:44:01 localhost systemd[1]: Reached target Remote File Systems.
Dec 2 01:44:01 localhost systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7648] dhcp4 (eth0): state changed new lease, address=38.102.83.204
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7654] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7681] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Dec 2 01:44:01 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7819] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7822] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7830] device (lo): Activation: successful, device activated.
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7841] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7844] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7850] manager: NetworkManager state is now CONNECTED_SITE
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7856] device (eth0): Activation: successful, device activated.
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7865] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 2 01:44:01 localhost NetworkManager[790]: [1764657841.7872] manager: startup complete
Dec 2 01:44:01 localhost systemd[1]: Finished Network Manager Wait Online.
Dec 2 01:44:01 localhost systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Dec 2 01:44:02 localhost cloud-init[985]: Cloud-init v. 22.1-9.el9 running 'init' at Tue, 02 Dec 2025 06:44:02 +0000. Up 6.26 seconds.
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: | eth0 | True | 38.102.83.204 | 255.255.255.0 | global | fa:16:3e:75:1e:f2 |
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: | eth0 | True | fe80::f816:3eff:fe75:1ef2/64 | . | link | fa:16:3e:75:1e:f2 |
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . |
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: | lo | True | ::1/128 | . | host | . |
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: | 0 | 0.0.0.0 | 38.102.83.1 | 0.0.0.0 | eth0 | UG |
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: | 1 | 38.102.83.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U |
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: | 2 | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 | eth0 | UGH |
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: | 1 | fe80::/64 | :: | eth0 | U |
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: | 3 | multicast | :: | eth0 | U |
Dec 2 01:44:02 localhost cloud-init[985]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 2 01:44:02 localhost systemd[1]: Starting Authorization Manager...
Dec 2 01:44:02 localhost polkitd[1037]: Started polkitd version 0.117
Dec 2 01:44:02 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Dec 2 01:44:02 localhost systemd[1]: Started Authorization Manager.
Dec 2 01:44:04 localhost cloud-init[985]: Generating public/private rsa key pair.
Dec 2 01:44:04 localhost cloud-init[985]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 2 01:44:04 localhost cloud-init[985]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 2 01:44:04 localhost cloud-init[985]: The key fingerprint is:
Dec 2 01:44:04 localhost cloud-init[985]: SHA256:xdbtppxT4ltQpGJoBJx3yaMDScjC2rd4cY4FYBNAH6M root@np0005541914.novalocal
Dec 2 01:44:04 localhost cloud-init[985]: The key's randomart image is:
Dec 2 01:44:04 localhost cloud-init[985]: +---[RSA 3072]----+
Dec 2 01:44:04 localhost cloud-init[985]: |o+=* +o+.. . . |
Dec 2 01:44:04 localhost cloud-init[985]: | .=.* =..o=. + |
Dec 2 01:44:04 localhost cloud-init[985]: | E o . ooo*.o o |
Dec 2 01:44:04 localhost cloud-init[985]: |. . o o.o+ . o |
Dec 2 01:44:04 localhost cloud-init[985]: | o B S. o + |
Dec 2 01:44:04 localhost cloud-init[985]: | . + . o B |
Dec 2 01:44:04 localhost cloud-init[985]: | . * . |
Dec 2 01:44:04 localhost cloud-init[985]: | + |
Dec 2 01:44:04 localhost cloud-init[985]: | . |
Dec 2 01:44:04 localhost cloud-init[985]: +----[SHA256]-----+
Dec 2 01:44:04 localhost cloud-init[985]: Generating public/private ecdsa key pair.
Dec 2 01:44:04 localhost cloud-init[985]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 2 01:44:04 localhost cloud-init[985]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 2 01:44:04 localhost cloud-init[985]: The key fingerprint is:
Dec 2 01:44:04 localhost cloud-init[985]: SHA256:2VElGY9kbaYJV8HvChLADuIAvZjsxxlz55v2L3TV5i0 root@np0005541914.novalocal
Dec 2 01:44:04 localhost cloud-init[985]: The key's randomart image is:
Dec 2 01:44:04 localhost cloud-init[985]: +---[ECDSA 256]---+
Dec 2 01:44:04 localhost cloud-init[985]: |.o . *B+. |
Dec 2 01:44:04 localhost cloud-init[985]: | o . . o .+o+= |
Dec 2 01:44:04 localhost cloud-init[985]: |.o + . o ..o.=o. |
Dec 2 01:44:04 localhost cloud-init[985]: |o..o.. ..o..o. o.|
Dec 2 01:44:04 localhost cloud-init[985]: |. . = o S ... o..|
Dec 2 01:44:04 localhost cloud-init[985]: | . + . .... E.o|
Dec 2 01:44:04 localhost cloud-init[985]: | . + .. . o |
Dec 2 01:44:04 localhost cloud-init[985]: | + . . |
Dec 2 01:44:04 localhost cloud-init[985]: | . ..o. |
Dec 2 01:44:04 localhost cloud-init[985]: +----[SHA256]-----+
Dec 2 01:44:04 localhost cloud-init[985]: Generating public/private ed25519 key pair.
Dec 2 01:44:04 localhost cloud-init[985]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 2 01:44:04 localhost cloud-init[985]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 2 01:44:04 localhost cloud-init[985]: The key fingerprint is:
Dec 2 01:44:04 localhost cloud-init[985]: SHA256:D2w0SBSn0CoE5kRQMlrgRRjLG27KaVSXQyxgiAmlXn4 root@np0005541914.novalocal
Dec 2 01:44:04 localhost cloud-init[985]: The key's randomart image is:
Dec 2 01:44:04 localhost cloud-init[985]: +--[ED25519 256]--+
Dec 2 01:44:04 localhost cloud-init[985]: |&&Ooo++.. |
Dec 2 01:44:04 localhost cloud-init[985]: |%*+..+o+ |
Dec 2 01:44:04 localhost cloud-init[985]: |oB o.=o o |
Dec 2 01:44:04 localhost cloud-init[985]: |o B o .o . |
Dec 2 01:44:04 localhost cloud-init[985]: | * o E S |
Dec 2 01:44:04 localhost cloud-init[985]: |= . . . o |
Dec 2 01:44:04 localhost cloud-init[985]: |.+ . |
Dec 2 01:44:04 localhost cloud-init[985]: |. |
Dec 2 01:44:04 localhost cloud-init[985]: | |
Dec 2 01:44:04 localhost cloud-init[985]: +----[SHA256]-----+
Dec 2 01:44:04 localhost systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Dec 2 01:44:04 localhost systemd[1]: Reached target Cloud-config availability.
Dec 2 01:44:04 localhost systemd[1]: Reached target Network is Online.
Dec 2 01:44:04 localhost systemd[1]: Starting Apply the settings specified in cloud-config...
Dec 2 01:44:04 localhost systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Dec 2 01:44:04 localhost systemd[1]: Starting Crash recovery kernel arming...
Dec 2 01:44:04 localhost systemd[1]: Starting Notify NFS peers of a restart...
Dec 2 01:44:04 localhost systemd[1]: Starting OpenSSH server daemon...
Dec 2 01:44:04 localhost sm-notify[1129]: Version 2.5.4 starting
Dec 2 01:44:04 localhost systemd[1]: Starting Permit User Sessions...
Dec 2 01:44:04 localhost systemd[1]: Started Notify NFS peers of a restart.
Dec 2 01:44:04 localhost systemd[1]: Finished Permit User Sessions.
Dec 2 01:44:04 localhost sshd[1130]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:04 localhost systemd[1]: Started Command Scheduler.
Dec 2 01:44:04 localhost systemd[1]: Started Getty on tty1.
Dec 2 01:44:04 localhost systemd[1]: Started Serial Getty on ttyS0.
Dec 2 01:44:04 localhost systemd[1]: Reached target Login Prompts.
Dec 2 01:44:04 localhost sshd[1134]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:04 localhost systemd[1]: Started OpenSSH server daemon.
Dec 2 01:44:04 localhost systemd[1]: Reached target Multi-User System.
Dec 2 01:44:04 localhost systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 2 01:44:04 localhost systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 2 01:44:04 localhost systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 2 01:44:04 localhost sshd[1146]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:04 localhost sshd[1159]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:04 localhost sshd[1170]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:04 localhost sshd[1188]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:04 localhost kdumpctl[1135]: kdump: No kdump initial ramdisk found.
Dec 2 01:44:04 localhost kdumpctl[1135]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Dec 2 01:44:04 localhost sshd[1198]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:04 localhost sshd[1220]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:04 localhost sshd[1253]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:04 localhost cloud-init[1254]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Tue, 02 Dec 2025 06:44:04 +0000. Up 8.89 seconds.
Dec 2 01:44:04 localhost sshd[1262]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:04 localhost systemd[1]: Finished Apply the settings specified in cloud-config.
Dec 2 01:44:04 localhost systemd[1]: Starting Execute cloud user/final scripts...
Dec 2 01:44:05 localhost dracut[1432]: dracut-057-21.git20230214.el9
Dec 2 01:44:05 localhost cloud-init[1450]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Tue, 02 Dec 2025 06:44:05 +0000. Up 9.24 seconds.
Dec 2 01:44:05 localhost dracut[1434]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Dec 2 01:44:05 localhost cloud-init[1472]: #############################################################
Dec 2 01:44:05 localhost cloud-init[1474]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 2 01:44:05 localhost cloud-init[1482]: 256 SHA256:2VElGY9kbaYJV8HvChLADuIAvZjsxxlz55v2L3TV5i0 root@np0005541914.novalocal (ECDSA)
Dec 2 01:44:05 localhost cloud-init[1489]: 256 SHA256:D2w0SBSn0CoE5kRQMlrgRRjLG27KaVSXQyxgiAmlXn4 root@np0005541914.novalocal (ED25519)
Dec 2 01:44:05 localhost cloud-init[1496]: 3072 SHA256:xdbtppxT4ltQpGJoBJx3yaMDScjC2rd4cY4FYBNAH6M root@np0005541914.novalocal (RSA)
Dec 2 01:44:05 localhost cloud-init[1497]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 2 01:44:05 localhost cloud-init[1500]: #############################################################
Dec 2 01:44:05 localhost cloud-init[1450]: Cloud-init v. 22.1-9.el9 finished at Tue, 02 Dec 2025 06:44:05 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0]. Up 9.49 seconds
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 2 01:44:05 localhost systemd[1]: Reloading Network Manager...
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 2 01:44:05 localhost NetworkManager[790]: [1764657845.3663] audit: op="reload" arg="0" pid=1581 uid=0 result="success"
Dec 2 01:44:05 localhost NetworkManager[790]: [1764657845.3672] config: signal: SIGHUP (no changes from disk)
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 2 01:44:05 localhost systemd[1]: Reloaded Network Manager.
Dec 2 01:44:05 localhost systemd[1]: Finished Execute cloud user/final scripts.
Dec 2 01:44:05 localhost systemd[1]: Reached target Cloud-init target.
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: memstrack is not available
Dec 2 01:44:05 localhost dracut[1434]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 2 01:44:05 localhost dracut[1434]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 2 01:44:06 localhost dracut[1434]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 2 01:44:06 localhost dracut[1434]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 2 01:44:06 localhost dracut[1434]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 2 01:44:06 localhost dracut[1434]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 2 01:44:06 localhost dracut[1434]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 2 01:44:06 localhost dracut[1434]: memstrack is not available
Dec 2 01:44:06 localhost dracut[1434]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 2 01:44:06 localhost dracut[1434]: *** Including module: systemd ***
Dec 2 01:44:06 localhost dracut[1434]: *** Including module: systemd-initrd ***
Dec 2 01:44:06 localhost dracut[1434]: *** Including module: i18n ***
Dec 2 01:44:06 localhost dracut[1434]: No KEYMAP configured.
Dec 2 01:44:06 localhost dracut[1434]: *** Including module: drm ***
Dec 2 01:44:06 localhost chronyd[766]: Selected source 174.138.193.90 (2.rhel.pool.ntp.org)
Dec 2 01:44:06 localhost chronyd[766]: System clock TAI offset set to 37 seconds
Dec 2 01:44:06 localhost dracut[1434]: *** Including module: prefixdevname ***
Dec 2 01:44:07 localhost dracut[1434]: *** Including module: kernel-modules ***
Dec 2 01:44:07 localhost dracut[1434]: *** Including module: kernel-modules-extra ***
Dec 2 01:44:07 localhost dracut[1434]: *** Including module: qemu ***
Dec 2 01:44:07 localhost dracut[1434]: *** Including module: fstab-sys ***
Dec 2 01:44:07 localhost dracut[1434]: *** Including module: rootfs-block ***
Dec 2 01:44:07 localhost dracut[1434]: *** Including module: terminfo ***
Dec 2 01:44:07 localhost dracut[1434]: *** Including module: udev-rules ***
Dec 2 01:44:08 localhost dracut[1434]: Skipping udev rule: 91-permissions.rules
Dec 2 01:44:08 localhost dracut[1434]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 2 01:44:08 localhost dracut[1434]: *** Including module: virtiofs ***
Dec 2 01:44:08 localhost dracut[1434]: *** Including module: dracut-systemd ***
Dec 2 01:44:08 localhost dracut[1434]: *** Including module: usrmount ***
Dec 2 01:44:08 localhost dracut[1434]: *** Including module: base ***
Dec 2 01:44:08 localhost dracut[1434]: *** Including module: fs-lib ***
Dec 2 01:44:08 localhost dracut[1434]: *** Including module: kdumpbase ***
Dec 2 01:44:08 localhost dracut[1434]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 2 01:44:08 localhost dracut[1434]: microcode_ctl module: mangling fw_dir
Dec 2 01:44:08 localhost dracut[1434]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 2 01:44:08 localhost dracut[1434]: microcode_ctl: configuration "intel" is ignored
Dec 2 01:44:08 localhost dracut[1434]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 2 01:44:08 localhost dracut[1434]: microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 2 01:44:08 localhost dracut[1434]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 2 01:44:08 localhost dracut[1434]: microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 2 01:44:08 localhost dracut[1434]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 2 01:44:08 localhost dracut[1434]: microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 2 01:44:08 localhost dracut[1434]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 2 01:44:08 localhost dracut[1434]: microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 2 01:44:08 localhost dracut[1434]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 2 01:44:09 localhost dracut[1434]: microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 2 01:44:09 localhost dracut[1434]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 2 01:44:09 localhost dracut[1434]: microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 2 01:44:09 localhost dracut[1434]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 2 01:44:09 localhost dracut[1434]: microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 2 01:44:09 localhost dracut[1434]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 2 01:44:09 localhost dracut[1434]: microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 2 01:44:09 localhost dracut[1434]: microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware"
Dec 2 01:44:09 localhost dracut[1434]: *** Including module: shutdown ***
Dec 2 01:44:09 localhost dracut[1434]: *** Including module: squash ***
Dec 2 01:44:09 localhost dracut[1434]: *** Including modules done ***
Dec 2 01:44:09 localhost dracut[1434]: *** Installing kernel module dependencies ***
Dec 2 01:44:09 localhost dracut[1434]: *** Installing kernel module dependencies done ***
Dec 2 01:44:09 localhost dracut[1434]: *** Resolving executable dependencies ***
Dec 2 01:44:11 localhost dracut[1434]: *** Resolving executable dependencies done ***
Dec 2 01:44:11 localhost dracut[1434]: *** Hardlinking files ***
Dec 2 01:44:11 localhost dracut[1434]: Mode: real
Dec 2 01:44:11 localhost dracut[1434]: Files: 1099
Dec 2 01:44:11 localhost dracut[1434]: Linked: 3 files
Dec 2 01:44:11 localhost dracut[1434]: Compared: 0 xattrs
Dec 2 01:44:11 localhost dracut[1434]: Compared: 373 files
Dec 2 01:44:11 localhost dracut[1434]: Saved: 61.04 KiB
Dec 2 01:44:11 localhost dracut[1434]: Duration: 0.017542 seconds
Dec 2 01:44:11 localhost dracut[1434]: *** Hardlinking files done ***
Dec 2 01:44:11 localhost dracut[1434]: Could not find 'strip'. Not stripping the initramfs.
Dec 2 01:44:11 localhost dracut[1434]: *** Generating early-microcode cpio image ***
Dec 2 01:44:11 localhost dracut[1434]: *** Constructing AuthenticAMD.bin ***
Dec 2 01:44:11 localhost dracut[1434]: *** Store current command line parameters ***
Dec 2 01:44:11 localhost dracut[1434]: Stored kernel commandline:
Dec 2 01:44:11 localhost dracut[1434]: No dracut internal kernel commandline stored in the initramfs
Dec 2 01:44:11 localhost dracut[1434]: *** Install squash loader ***
Dec 2 01:44:11 localhost dracut[1434]: *** Squashing the files inside the initramfs ***
Dec 2 01:44:11 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 2 01:44:12 localhost dracut[1434]: *** Squashing the files inside the initramfs done ***
Dec 2 01:44:12 localhost dracut[1434]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' ***
Dec 2 01:44:13 localhost dracut[1434]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done ***
Dec 2 01:44:14 localhost kdumpctl[1135]: kdump: kexec: loaded kdump kernel
Dec 2 01:44:14 localhost kdumpctl[1135]: kdump: Starting kdump: [OK]
Dec 2 01:44:14 localhost systemd[1]: Finished Crash recovery kernel arming.
Dec 2 01:44:14 localhost systemd[1]: Startup finished in 1.246s (kernel) + 1.963s (initrd) + 15.176s (userspace) = 18.386s.
Dec 2 01:44:31 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 2 01:44:34 localhost sshd[4173]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:34 localhost systemd[1]: Created slice User Slice of UID 1000.
Dec 2 01:44:34 localhost systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 2 01:44:34 localhost systemd-logind[760]: New session 1 of user zuul.
Dec 2 01:44:34 localhost systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 2 01:44:34 localhost systemd[1]: Starting User Manager for UID 1000...
Dec 2 01:44:35 localhost systemd[4177]: Queued start job for default target Main User Target.
Dec 2 01:44:35 localhost systemd[4177]: Created slice User Application Slice.
Dec 2 01:44:35 localhost systemd[4177]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 2 01:44:35 localhost systemd[4177]: Started Daily Cleanup of User's Temporary Directories.
Dec 2 01:44:35 localhost systemd[4177]: Reached target Paths.
Dec 2 01:44:35 localhost systemd[4177]: Reached target Timers.
Dec 2 01:44:35 localhost systemd[4177]: Starting D-Bus User Message Bus Socket...
Dec 2 01:44:35 localhost systemd[4177]: Starting Create User's Volatile Files and Directories...
Dec 2 01:44:35 localhost systemd[4177]: Finished Create User's Volatile Files and Directories.
Dec 2 01:44:35 localhost systemd[4177]: Listening on D-Bus User Message Bus Socket.
Dec 2 01:44:35 localhost systemd[4177]: Reached target Sockets.
Dec 2 01:44:35 localhost systemd[4177]: Reached target Basic System.
Dec 2 01:44:35 localhost systemd[4177]: Reached target Main User Target.
Dec 2 01:44:35 localhost systemd[4177]: Startup finished in 149ms.
Dec 2 01:44:35 localhost systemd[1]: Started User Manager for UID 1000.
Dec 2 01:44:35 localhost systemd[1]: Started Session 1 of User zuul.
Dec 2 01:44:35 localhost python3[4229]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 01:44:45 localhost python3[4248]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 01:44:51 localhost python3[4301]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 01:44:52 localhost python3[4331]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 2 01:44:55 localhost python3[4347]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:44:55 localhost python3[4361]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:44:57 localhost python3[4420]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 01:44:57 localhost python3[4461]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764657896.8774223-393-148421869851076/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=fa40fdabeeae48b78b01a4cbccbd42f6_id_rsa follow=False checksum=c9b7a1839a060a12dd883255955d0b791bf96d1d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:44:58 localhost python3[4534]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 01:44:59 localhost python3[4575]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764657898.673202-497-255877473705326/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=fa40fdabeeae48b78b01a4cbccbd42f6_id_rsa.pub follow=False checksum=076b8979e1bf6ba70130c32daa0e2e874f6f0bae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:01 localhost python3[4603]: ansible-ping Invoked with data=pong
Dec 2 01:45:03 localhost python3[4617]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 01:45:06 localhost python3[4670]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 2 01:45:09 localhost python3[4692]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:09 localhost python3[4706]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:09 localhost python3[4720]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:10 localhost python3[4734]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:10 localhost python3[4748]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:11 localhost python3[4762]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:12 localhost chronyd[766]: Selected source 162.159.200.1 (2.rhel.pool.ntp.org)
Dec 2 01:45:13 localhost python3[4778]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:15 localhost python3[4826]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 01:45:15 localhost python3[4869]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764657914.9519687-103-42547496934856/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:23 localhost python3[4898]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:23 localhost python3[4912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:23 localhost python3[4926]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:23 localhost python3[4940]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:24 localhost python3[4954]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:24 localhost python3[4968]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:24 localhost python3[4982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:25 localhost python3[4996]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:25 localhost python3[5010]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:25 localhost python3[5024]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:26 localhost python3[5038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:27 localhost python3[5052]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:27 localhost python3[5066]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:27 localhost python3[5080]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:27 localhost python3[5094]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:28 localhost python3[5108]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:28 localhost python3[5122]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:28 localhost python3[5136]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:29 localhost python3[5150]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:29 localhost python3[5164]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:29 localhost python3[5178]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:29 localhost python3[5192]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:29 localhost python3[5206]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:30 localhost python3[5220]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:30 localhost python3[5234]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:30 localhost python3[5248]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:31 localhost python3[5264]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 2 01:45:31 localhost systemd[1]: Starting Time & Date Service...
Dec 2 01:45:31 localhost systemd[1]: Started Time & Date Service.
Dec 2 01:45:31 localhost systemd-timedated[5266]: Changed time zone to 'UTC' (UTC).
Dec 2 01:45:32 localhost python3[5285]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:45:33 localhost python3[5331]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 01:45:34 localhost python3[5372]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764657933.5853722-502-139024217540485/source _original_basename=tmpxaia43oi follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:45:35 localhost python3[5432]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 01:45:35 localhost python3[5473]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764657935.0998247-588-212760711835856/source _original_basename=tmpl0uvbdp2 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:45:37 localhost python3[5535]: ansible-ansible.legacy.stat Invoked with 
path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 01:45:38 localhost python3[5578]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764657937.365522-732-222610062339955/source _original_basename=tmp8166pp9v follow=False checksum=01954034105cdb65b42722894a5c1036808c70c7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:45:39 localhost python3[5606]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:45:39 localhost python3[5622]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:45:40 localhost python3[5672]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 01:45:40 localhost python3[5715]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764657940.2812839-857-102749255966780/source _original_basename=tmpjz7i88_7 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None 
remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:45:42 localhost python3[5746]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-2304-36f4-000000000023-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:45:43 localhost python3[5764]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-2304-36f4-000000000024-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None Dec 2 01:45:45 localhost python3[5782]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:46:01 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Dec 2 01:46:04 localhost python3[5801]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:46:54 localhost systemd[4177]: Starting Mark boot as successful... Dec 2 01:46:54 localhost systemd[4177]: Finished Mark boot as successful. 
Dec 2 01:47:04 localhost systemd-logind[760]: Session 1 logged out. Waiting for processes to exit. Dec 2 01:47:22 localhost chronyd[766]: Selected source 174.138.193.90 (2.rhel.pool.ntp.org) Dec 2 01:48:01 localhost systemd[1]: Unmounting EFI System Partition Automount... Dec 2 01:48:01 localhost systemd[1]: efi.mount: Deactivated successfully. Dec 2 01:48:01 localhost systemd[1]: Unmounted EFI System Partition Automount. Dec 2 01:49:54 localhost systemd[4177]: Created slice User Background Tasks Slice. Dec 2 01:49:54 localhost systemd[4177]: Starting Cleanup of User's Temporary Files and Directories... Dec 2 01:49:54 localhost systemd[4177]: Finished Cleanup of User's Temporary Files and Directories. Dec 2 01:50:08 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 Dec 2 01:50:08 localhost kernel: pci 0000:00:07.0: reg 0x10: [io 0x0000-0x003f] Dec 2 01:50:08 localhost kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff] Dec 2 01:50:08 localhost kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref] Dec 2 01:50:08 localhost kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] Dec 2 01:50:08 localhost kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref] Dec 2 01:50:08 localhost kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref] Dec 2 01:50:08 localhost kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff] Dec 2 01:50:08 localhost kernel: pci 0000:00:07.0: BAR 0: assigned [io 0x1000-0x103f] Dec 2 01:50:08 localhost kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003) Dec 2 01:50:08 localhost NetworkManager[790]: [1764658208.2320] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Dec 2 01:50:08 localhost systemd-udevd[5809]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 01:50:08 localhost NetworkManager[790]: [1764658208.2454] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Dec 2 01:50:08 localhost NetworkManager[790]: [1764658208.2485] settings: (eth1): created default wired connection 'Wired connection 1' Dec 2 01:50:08 localhost NetworkManager[790]: [1764658208.2490] device (eth1): carrier: link connected Dec 2 01:50:08 localhost NetworkManager[790]: [1764658208.2493] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed') Dec 2 01:50:08 localhost NetworkManager[790]: [1764658208.2499] policy: auto-activating connection 'Wired connection 1' (770c572d-e688-3a45-874b-65e8776845d6) Dec 2 01:50:08 localhost NetworkManager[790]: [1764658208.2505] device (eth1): Activation: starting connection 'Wired connection 1' (770c572d-e688-3a45-874b-65e8776845d6) Dec 2 01:50:08 localhost NetworkManager[790]: [1764658208.2507] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Dec 2 01:50:08 localhost NetworkManager[790]: [1764658208.2511] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Dec 2 01:50:08 localhost NetworkManager[790]: [1764658208.2518] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Dec 2 01:50:08 localhost NetworkManager[790]: [1764658208.2523] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Dec 2 01:50:09 localhost sshd[5813]: main: sshd: ssh-rsa algorithm is disabled Dec 2 01:50:09 localhost systemd-logind[760]: New session 3 of user zuul. Dec 2 01:50:09 localhost systemd[1]: Started Session 3 of User zuul. 
Dec 2 01:50:09 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready Dec 2 01:50:09 localhost python3[5830]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-8e68-9bb8-000000000475-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:50:22 localhost python3[5880]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 01:50:22 localhost python3[5923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764658222.3319945-537-55041512328690/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=e393cafdfd30e64ab7d980887ec656a764e51bf5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:50:23 localhost python3[5953]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 01:50:23 localhost systemd[1]: NetworkManager-wait-online.service: Deactivated successfully. Dec 2 01:50:23 localhost systemd[1]: Stopped Network Manager Wait Online. Dec 2 01:50:23 localhost systemd[1]: Stopping Network Manager Wait Online... Dec 2 01:50:23 localhost NetworkManager[790]: [1764658223.6176] caught SIGTERM, shutting down normally. Dec 2 01:50:23 localhost systemd[1]: Stopping Network Manager... 
Dec 2 01:50:23 localhost NetworkManager[790]: [1764658223.6306] dhcp4 (eth0): canceled DHCP transaction Dec 2 01:50:23 localhost NetworkManager[790]: [1764658223.6306] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Dec 2 01:50:23 localhost NetworkManager[790]: [1764658223.6306] dhcp4 (eth0): state changed no lease Dec 2 01:50:23 localhost NetworkManager[790]: [1764658223.6312] manager: NetworkManager state is now CONNECTING Dec 2 01:50:23 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Dec 2 01:50:23 localhost NetworkManager[790]: [1764658223.6425] dhcp4 (eth1): canceled DHCP transaction Dec 2 01:50:23 localhost NetworkManager[790]: [1764658223.6427] dhcp4 (eth1): state changed no lease Dec 2 01:50:23 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Dec 2 01:50:23 localhost NetworkManager[790]: [1764658223.6511] exiting (success) Dec 2 01:50:23 localhost systemd[1]: NetworkManager.service: Deactivated successfully. Dec 2 01:50:23 localhost systemd[1]: Stopped Network Manager. Dec 2 01:50:23 localhost systemd[1]: NetworkManager.service: Consumed 2.529s CPU time. Dec 2 01:50:23 localhost systemd[1]: Starting Network Manager... Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.7036] NetworkManager (version 1.42.2-1.el9) is starting... (after a restart, boot:9c01ca59-fcb0-40d3-99c4-2690b78cc18a) Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.7039] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf) Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.7064] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Dec 2 01:50:23 localhost systemd[1]: Started Network Manager. Dec 2 01:50:23 localhost systemd[1]: Starting Network Manager Wait Online... Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.7142] manager[0x5653ca5a9090]: monitoring kernel firmware directory '/lib/firmware'. 
Dec 2 01:50:23 localhost systemd[1]: Starting Hostname Service... Dec 2 01:50:23 localhost systemd[1]: Started Hostname Service. Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.7988] hostname: hostname: using hostnamed Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.7988] hostname: static hostname changed from (none) to "np0005541914.novalocal" Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.7992] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto) Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.7997] manager[0x5653ca5a9090]: rfkill: Wi-Fi hardware radio set enabled Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.7997] manager[0x5653ca5a9090]: rfkill: WWAN hardware radio set enabled Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8022] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so) Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8022] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8023] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8023] manager: Networking is enabled by state file Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8031] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so") Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8032] settings: Loaded settings plugin: keyfile (internal) Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8079] dhcp: init: Using DHCP client 'internal' Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8083] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1) Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8092] device (lo): state change: unmanaged -> unavailable (reason 
'connection-assumed', sys-iface-state: 'external') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8102] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8119] device (lo): Activation: starting connection 'lo' (7d726e81-3c90-4757-9293-46e316e2c15c) Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8129] device (eth0): carrier: link connected Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8135] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8143] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated) Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8144] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8152] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8167] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8174] device (eth1): carrier: link connected Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8180] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8187] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (770c572d-e688-3a45-874b-65e8776845d6) (indicated) Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8188] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Dec 2 01:50:23 
localhost NetworkManager[5967]: [1764658223.8196] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8208] device (eth1): Activation: starting connection 'Wired connection 1' (770c572d-e688-3a45-874b-65e8776845d6) Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8249] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8255] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8262] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8268] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8273] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8277] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8281] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8285] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8295] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8301] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8316] device (eth1): state change: config -> 
ip-config (reason 'none', sys-iface-state: 'assume') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8321] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8374] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8378] dhcp4 (eth0): state changed new lease, address=38.102.83.204 Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8387] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8394] device (lo): Activation: successful, device activated. Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8402] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8507] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8554] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8558] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8563] manager: NetworkManager state is now CONNECTED_SITE Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8568] device (eth0): Activation: successful, device activated. 
Dec 2 01:50:23 localhost NetworkManager[5967]: [1764658223.8574] manager: NetworkManager state is now CONNECTED_GLOBAL Dec 2 01:50:24 localhost python3[6015]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-8e68-9bb8-000000000136-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:50:33 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Dec 2 01:50:53 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 2 01:51:08 localhost NetworkManager[5967]: [1764658268.7619] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Dec 2 01:51:08 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Dec 2 01:51:08 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Dec 2 01:51:08 localhost NetworkManager[5967]: [1764658268.7831] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Dec 2 01:51:08 localhost NetworkManager[5967]: [1764658268.7836] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Dec 2 01:51:08 localhost NetworkManager[5967]: [1764658268.7846] device (eth1): Activation: successful, device activated. Dec 2 01:51:08 localhost NetworkManager[5967]: [1764658268.7856] manager: startup complete Dec 2 01:51:08 localhost systemd[1]: Finished Network Manager Wait Online. Dec 2 01:51:18 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Dec 2 01:51:24 localhost systemd[1]: session-3.scope: Deactivated successfully. Dec 2 01:51:24 localhost systemd[1]: session-3.scope: Consumed 1.401s CPU time. Dec 2 01:51:24 localhost systemd-logind[760]: Session 3 logged out. Waiting for processes to exit. 
Dec 2 01:51:24 localhost systemd-logind[760]: Removed session 3. Dec 2 01:51:42 localhost sshd[6055]: main: sshd: ssh-rsa algorithm is disabled Dec 2 01:51:42 localhost systemd-logind[760]: New session 4 of user zuul. Dec 2 01:51:42 localhost systemd[1]: Started Session 4 of User zuul. Dec 2 01:51:42 localhost python3[6106]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 01:51:43 localhost python3[6149]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764658302.6624026-628-210851161158306/source _original_basename=tmpreutzs0i follow=False checksum=c2b23ffe44719bb1642f7b68b2bf34d320a2a721 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:51:45 localhost systemd[1]: session-4.scope: Deactivated successfully. Dec 2 01:51:45 localhost systemd-logind[760]: Session 4 logged out. Waiting for processes to exit. Dec 2 01:51:45 localhost systemd-logind[760]: Removed session 4. Dec 2 01:55:54 localhost systemd[1]: Starting dnf makecache... Dec 2 01:55:55 localhost dnf[6165]: Failed determining last makecache time. Dec 2 01:55:55 localhost dnf[6165]: There are no enabled repositories in "/etc/yum.repos.d", "/etc/yum/repos.d", "/etc/distro.repos.d". Dec 2 01:55:55 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Dec 2 01:55:55 localhost systemd[1]: Finished dnf makecache. Dec 2 01:57:38 localhost sshd[6169]: main: sshd: ssh-rsa algorithm is disabled Dec 2 01:57:39 localhost systemd-logind[760]: New session 5 of user zuul. Dec 2 01:57:39 localhost systemd[1]: Started Session 5 of User zuul. 
Dec 2 01:57:39 localhost python3[6188]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-e6e8-5ca8-000000001d02-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:57:40 localhost python3[6207]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:57:41 localhost python3[6223]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:57:41 localhost python3[6239]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:57:41 localhost python3[6255]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:57:42 localhost python3[6271]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:57:43 localhost python3[6319]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 01:57:43 localhost python3[6362]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764658663.2883444-648-40554091009218/source _original_basename=tmp0hal718q follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:57:45 localhost python3[6392]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 01:57:45 localhost systemd[1]: Reloading. Dec 2 01:57:45 localhost systemd-rc-local-generator[6409]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 01:57:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 01:57:47 localhost python3[6438]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 2 01:57:48 localhost python3[6454]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 01:57:48 localhost python3[6472]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 01:57:49 localhost python3[6490]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 01:57:49 localhost python3[6508]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 01:58:00 localhost python3[6526]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init"; cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system"; cat /sys/fs/cgroup/system.slice/io.max; echo "user"; cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-e6e8-5ca8-000000001d09-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 01:58:01 localhost python3[6546]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 01:58:04 localhost systemd[1]: session-5.scope: Deactivated successfully.
Dec 2 01:58:04 localhost systemd[1]: session-5.scope: Consumed 4.033s CPU time.
Dec 2 01:58:04 localhost systemd-logind[760]: Session 5 logged out. Waiting for processes to exit.
Dec 2 01:58:04 localhost systemd-logind[760]: Removed session 5.
Dec 2 01:59:23 localhost sshd[6553]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:59:23 localhost systemd[1]: Starting Cleanup of Temporary Directories...
Dec 2 01:59:23 localhost systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 2 01:59:23 localhost systemd[1]: Finished Cleanup of Temporary Directories.
Dec 2 01:59:23 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 2 01:59:23 localhost systemd-logind[760]: New session 6 of user zuul.
Dec 2 01:59:23 localhost systemd[1]: Started Session 6 of User zuul.
Dec 2 01:59:24 localhost systemd[1]: Starting RHSM dbus service...
Dec 2 01:59:24 localhost systemd[1]: Started RHSM dbus service.
Dec 2 01:59:24 localhost rhsm-service[6579]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 2 01:59:24 localhost rhsm-service[6579]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 2 01:59:24 localhost rhsm-service[6579]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 2 01:59:24 localhost rhsm-service[6579]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 2 01:59:26 localhost rhsm-service[6579]: INFO [subscription_manager.managerlib:90] Consumer created: np0005541914.novalocal (5e8bb4be-b98c-46c0-ac7b-5189dfb48508)
Dec 2 01:59:26 localhost subscription-manager[6579]: Registered system with identity: 5e8bb4be-b98c-46c0-ac7b-5189dfb48508
Dec 2 01:59:26 localhost rhsm-service[6579]: INFO [subscription_manager.entcertlib:131] certs updated:
Dec 2 01:59:26 localhost rhsm-service[6579]: Total updates: 1
Dec 2 01:59:26 localhost rhsm-service[6579]: Found (local) serial# []
Dec 2 01:59:26 localhost rhsm-service[6579]: Expected (UEP) serial# [958660430422246327]
Dec 2 01:59:26 localhost rhsm-service[6579]: Added (new)
Dec 2 01:59:26 localhost rhsm-service[6579]: [sn:958660430422246327 ( Content Access,) @ /etc/pki/entitlement/958660430422246327.pem]
Dec 2 01:59:26 localhost rhsm-service[6579]: Deleted (rogue):
Dec 2 01:59:26 localhost rhsm-service[6579]:
Dec 2 01:59:26 localhost subscription-manager[6579]: Added subscription for 'Content Access' contract 'None'
Dec 2 01:59:26 localhost subscription-manager[6579]: Added subscription for product ' Content Access'
Dec 2 01:59:28 localhost rhsm-service[6579]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 2 01:59:28 localhost rhsm-service[6579]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 2 01:59:28 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 01:59:28 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 01:59:28 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 01:59:28 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 01:59:28 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 01:59:31 localhost python3[6670]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163e3b-3c83-0809-2eed-00000000000d-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:00:24 localhost python3[6689]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:01:08 localhost setsebool[6780]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 2 02:01:08 localhost setsebool[6780]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 2 02:01:16 localhost kernel: SELinux: Converting 410 SID table entries...
Dec 2 02:01:16 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 02:01:16 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 02:01:16 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 02:01:16 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 02:01:16 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 02:01:16 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 02:01:16 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 02:01:29 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=3 res=1
Dec 2 02:01:29 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 02:01:29 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 02:01:29 localhost systemd[1]: Reloading.
Dec 2 02:01:29 localhost systemd-rc-local-generator[7651]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:01:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:01:29 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 02:01:31 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:01:31 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:01:33 localhost podman[13009]: 2025-12-02 07:01:33.676958984 +0000 UTC m=+0.103677637 system refresh
Dec 2 02:01:34 localhost systemd[4177]: Starting D-Bus User Message Bus...
Dec 2 02:01:34 localhost dbus-broker-launch[14516]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 2 02:01:34 localhost dbus-broker-launch[14516]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 2 02:01:34 localhost systemd[4177]: Started D-Bus User Message Bus.
Dec 2 02:01:34 localhost journal[14516]: Ready
Dec 2 02:01:34 localhost systemd[4177]: selinux: avc: op=load_policy lsm=selinux seqno=3 res=1
Dec 2 02:01:34 localhost systemd[4177]: Created slice Slice /user.
Dec 2 02:01:34 localhost systemd[4177]: podman-14343.scope: unit configures an IP firewall, but not running as root.
Dec 2 02:01:34 localhost systemd[4177]: (This warning is only shown for the first unit using IP firewalling.)
Dec 2 02:01:34 localhost systemd[4177]: Started podman-14343.scope.
Dec 2 02:01:34 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:01:34 localhost systemd[4177]: Started podman-pause-30e316fa.scope.
Dec 2 02:01:35 localhost systemd[1]: session-6.scope: Deactivated successfully.
Dec 2 02:01:35 localhost systemd[1]: session-6.scope: Consumed 51.879s CPU time.
Dec 2 02:01:35 localhost systemd-logind[760]: Session 6 logged out. Waiting for processes to exit.
Dec 2 02:01:35 localhost systemd-logind[760]: Removed session 6.
Dec 2 02:01:37 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 02:01:37 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 02:01:37 localhost systemd[1]: man-db-cache-update.service: Consumed 9.832s CPU time.
Dec 2 02:01:37 localhost systemd[1]: run-r0d21e8c1f6f54a6eb748a4c9f9ac015d.service: Deactivated successfully.
Dec 2 02:01:52 localhost sshd[18439]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:01:52 localhost sshd[18436]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:01:52 localhost sshd[18437]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:01:52 localhost sshd[18438]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:01:52 localhost sshd[18435]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:01:56 localhost sshd[18445]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:01:56 localhost systemd-logind[760]: New session 7 of user zuul.
Dec 2 02:01:56 localhost systemd[1]: Started Session 7 of User zuul.
Dec 2 02:01:57 localhost python3[18462]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI3vTocdvpL7KoTE0s+B2HOorkXEJmfFflLp6CHTopK26IhGD4IX+p0PXIjQjXzwbw8u6vDuDtUAlLIH4wGuE2A= zuul@np0005541906.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 02:01:57 localhost python3[18478]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI3vTocdvpL7KoTE0s+B2HOorkXEJmfFflLp6CHTopK26IhGD4IX+p0PXIjQjXzwbw8u6vDuDtUAlLIH4wGuE2A= zuul@np0005541906.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 02:01:59 localhost systemd[1]: session-7.scope: Deactivated successfully.
Dec 2 02:01:59 localhost systemd-logind[760]: Session 7 logged out. Waiting for processes to exit.
Dec 2 02:01:59 localhost systemd-logind[760]: Removed session 7.
Dec 2 02:03:27 localhost sshd[18481]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:03:27 localhost systemd-logind[760]: New session 8 of user zuul.
Dec 2 02:03:27 localhost systemd[1]: Started Session 8 of User zuul.
Dec 2 02:03:28 localhost python3[18500]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 02:03:29 localhost python3[18516]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541914.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 2 02:03:30 localhost python3[18566]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:03:31 localhost python3[18609]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764659010.694755-142-256007831715982/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=fa40fdabeeae48b78b01a4cbccbd42f6_id_rsa follow=False checksum=c9b7a1839a060a12dd883255955d0b791bf96d1d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:03:32 localhost python3[18671]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:03:33 localhost python3[18714]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764659012.4336061-229-92085273678354/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=fa40fdabeeae48b78b01a4cbccbd42f6_id_rsa.pub follow=False checksum=076b8979e1bf6ba70130c32daa0e2e874f6f0bae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:03:35 localhost python3[18744]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:03:36 localhost python3[18790]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:03:36 localhost python3[18806]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmp54dfnajq recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:03:37 localhost python3[18866]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:03:37 localhost python3[18882]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmpynf6tx62 recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:03:39 localhost python3[18942]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:03:39 localhost python3[18958]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmpanf1kntl recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:03:40 localhost systemd[1]: session-8.scope: Deactivated successfully.
Dec 2 02:03:40 localhost systemd[1]: session-8.scope: Consumed 3.766s CPU time.
Dec 2 02:03:40 localhost systemd-logind[760]: Session 8 logged out. Waiting for processes to exit.
Dec 2 02:03:40 localhost systemd-logind[760]: Removed session 8.
Dec 2 02:05:53 localhost sshd[18974]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:05:53 localhost systemd-logind[760]: New session 9 of user zuul.
Dec 2 02:05:53 localhost systemd[1]: Started Session 9 of User zuul.
Dec 2 02:05:54 localhost python3[19020]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:10:53 localhost systemd-logind[760]: Session 9 logged out. Waiting for processes to exit.
Dec 2 02:10:53 localhost systemd[1]: session-9.scope: Deactivated successfully.
Dec 2 02:10:53 localhost systemd-logind[760]: Removed session 9.
Dec 2 02:13:25 localhost sshd[19025]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:13:25 localhost sshd[19026]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:14:43 localhost sshd[19028]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:16:42 localhost sshd[19029]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:18:16 localhost sshd[19034]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:18:16 localhost systemd-logind[760]: New session 10 of user zuul.
Dec 2 02:18:16 localhost systemd[1]: Started Session 10 of User zuul.
Dec 2 02:18:16 localhost python3[19051]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163e3b-3c83-8c0a-0232-00000000000c-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:18:19 localhost python3[19071]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163e3b-3c83-8c0a-0232-00000000000d-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:18:31 localhost sshd[19075]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:18:50 localhost python3[19092]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False
Dec 2 02:18:54 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:19:27 localhost python3[19248]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False
Dec 2 02:19:30 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:19:30 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:19:50 localhost python3[19449]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False
Dec 2 02:19:52 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:19:52 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:19:58 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:19:58 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:20:21 localhost python3[19784]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Dec 2 02:20:25 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:20:25 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:20:30 localhost sshd[20029]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:20:30 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:20:30 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:20:57 localhost python3[20181]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Dec 2 02:21:00 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:21:00 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:21:05 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:21:17 localhost python3[20518]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-000000000013-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:21:45 localhost python3[20538]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:22:06 localhost kernel: SELinux: Converting 490 SID table entries...
Dec 2 02:22:06 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 02:22:06 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 02:22:06 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 02:22:06 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 02:22:06 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 02:22:06 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 02:22:06 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 02:22:06 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=4 res=1
Dec 2 02:22:06 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 2 02:22:09 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 02:22:09 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 02:22:09 localhost systemd[1]: Reloading.
Dec 2 02:22:09 localhost systemd-rc-local-generator[21180]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:22:09 localhost systemd-sysv-generator[21185]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:22:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:22:09 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 02:22:10 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 02:22:10 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 02:22:10 localhost systemd[1]: run-r942c90c646fe4ff28a4562496897aeac.service: Deactivated successfully.
Dec 2 02:22:11 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:22:11 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:22:26 localhost python3[21769]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-000000000015-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:22:28 localhost sshd[21773]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:22:45 localhost python3[21791]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:22:47 localhost python3[21839]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:22:47 localhost python3[21882]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764660166.8141565-334-92596069481338/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=91bc45728dd9738fc644e3ada9d8642294da29ff backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:22:49 localhost python3[21912]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 2 02:22:49 localhost systemd-journald[619]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 89.2 (297 of 333 items), suggesting rotation.
Dec 2 02:22:49 localhost systemd-journald[619]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 2 02:22:49 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 2 02:22:49 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 2 02:22:49 localhost python3[21933]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 2 02:22:49 localhost python3[21953]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 2 02:22:50 localhost python3[21973]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 2 02:22:50 localhost python3[21993]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 2 02:22:53 localhost python3[22013]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 02:22:53 localhost systemd[1]: Starting LSB: Bring up/down networking...
Dec 2 02:22:53 localhost network[22016]: WARN : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 2 02:22:53 localhost network[22027]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 2 02:22:53 localhost network[22016]: WARN : [network] 'network-scripts' will be removed from distribution in near future.
Dec 2 02:22:53 localhost network[22028]: 'network-scripts' will be removed from distribution in near future.
Dec 2 02:22:53 localhost network[22016]: WARN : [network] It is advised to switch to 'NetworkManager' instead for network management.
Dec 2 02:22:53 localhost network[22029]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 2 02:22:53 localhost NetworkManager[5967]: [1764660173.3911] audit: op="connections-reload" pid=22057 uid=0 result="success"
Dec 2 02:22:53 localhost network[22016]: Bringing up loopback interface: [ OK ]
Dec 2 02:22:53 localhost NetworkManager[5967]: [1764660173.5960] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22145 uid=0 result="success"
Dec 2 02:22:53 localhost network[22016]: Bringing up interface eth0: [ OK ]
Dec 2 02:22:53 localhost systemd[1]: Started LSB: Bring up/down networking.
Dec 2 02:22:54 localhost python3[22187]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 02:22:54 localhost systemd[1]: Starting Open vSwitch Database Unit...
Dec 2 02:22:54 localhost chown[22191]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 2 02:22:54 localhost ovs-ctl[22196]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 2 02:22:54 localhost ovs-ctl[22196]: Creating empty database /etc/openvswitch/conf.db [ OK ] Dec 2 02:22:54 localhost ovs-ctl[22196]: Starting ovsdb-server [ OK ] Dec 2 02:22:54 localhost ovs-vsctl[22245]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1 Dec 2 02:22:54 localhost ovs-vsctl[22265]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.6-141.el9fdp "external-ids:system-id=\"515e0717-8baa-40e6-ac30-5fb148626504\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\"" Dec 2 02:22:54 localhost ovs-ctl[22196]: Configuring Open vSwitch system IDs [ OK ] Dec 2 02:22:54 localhost ovs-ctl[22196]: Enabling remote OVSDB managers [ OK ] Dec 2 02:22:54 localhost systemd[1]: Started Open vSwitch Database Unit. Dec 2 02:22:54 localhost ovs-vsctl[22271]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005541914.novalocal Dec 2 02:22:54 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports... Dec 2 02:22:54 localhost systemd[1]: Finished Open vSwitch Delete Transient Ports. Dec 2 02:22:54 localhost systemd[1]: Starting Open vSwitch Forwarding Unit... Dec 2 02:22:54 localhost kernel: openvswitch: Open vSwitch switching datapath Dec 2 02:22:54 localhost ovs-ctl[22315]: Inserting openvswitch module [ OK ] Dec 2 02:22:54 localhost ovs-ctl[22284]: Starting ovs-vswitchd [ OK ] Dec 2 02:22:54 localhost ovs-vsctl[22333]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005541914.novalocal Dec 2 02:22:54 localhost ovs-ctl[22284]: Enabling remote OVSDB managers [ OK ] Dec 2 02:22:54 localhost systemd[1]: Started Open vSwitch Forwarding Unit. Dec 2 02:22:54 localhost systemd[1]: Starting Open vSwitch... Dec 2 02:22:54 localhost systemd[1]: Finished Open vSwitch. 
Dec 2 02:23:25 localhost python3[22351]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-00000000001a-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:23:26 localhost NetworkManager[5967]: [1764660206.6463] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22510 uid=0 result="success" Dec 2 02:23:26 localhost ifup[22511]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:26 localhost ifup[22512]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:26 localhost ifup[22513]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 2 02:23:26 localhost NetworkManager[5967]: [1764660206.6676] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22519 uid=0 result="success" Dec 2 02:23:26 localhost ovs-vsctl[22521]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:08:72:ba -- set bridge br-ex fail_mode=standalone -- del-controller br-ex Dec 2 02:23:26 localhost kernel: device ovs-system entered promiscuous mode Dec 2 02:23:26 localhost NetworkManager[5967]: [1764660206.7213] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4) Dec 2 02:23:26 localhost systemd-udevd[22522]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 02:23:26 localhost kernel: Timeout policy base is empty Dec 2 02:23:26 localhost kernel: Failed to associated timeout policy `ovs_test_tp' Dec 2 02:23:26 localhost kernel: device br-ex entered promiscuous mode Dec 2 02:23:26 localhost systemd-udevd[22533]: Network interface NamePolicy= disabled on kernel command line. Dec 2 02:23:26 localhost NetworkManager[5967]: [1764660206.7623] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5) Dec 2 02:23:26 localhost NetworkManager[5967]: [1764660206.7836] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22547 uid=0 result="success" Dec 2 02:23:26 localhost NetworkManager[5967]: [1764660206.8025] device (br-ex): carrier: link connected Dec 2 02:23:29 localhost NetworkManager[5967]: [1764660209.8580] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22576 uid=0 result="success" Dec 2 02:23:29 localhost NetworkManager[5967]: [1764660209.9053] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22591 uid=0 result="success" Dec 2 02:23:29 localhost NET[22616]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf Dec 2 02:23:30 localhost NetworkManager[5967]: [1764660210.0004] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed') Dec 2 02:23:30 localhost NetworkManager[5967]: [1764660210.0106] dhcp4 (eth1): canceled DHCP transaction Dec 2 02:23:30 localhost NetworkManager[5967]: [1764660210.0106] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Dec 2 02:23:30 localhost NetworkManager[5967]: [1764660210.0106] dhcp4 (eth1): state changed no lease Dec 2 02:23:30 localhost NetworkManager[5967]: [1764660210.0147] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22625 uid=0 result="success" Dec 2 02:23:30 localhost ifup[22626]: You are using 'ifup' script provided by 'network-scripts', 
which are now deprecated. Dec 2 02:23:30 localhost ifup[22627]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:30 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Dec 2 02:23:30 localhost ifup[22629]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 2 02:23:30 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Dec 2 02:23:30 localhost NetworkManager[5967]: [1764660210.0517] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22642 uid=0 result="success" Dec 2 02:23:30 localhost NetworkManager[5967]: [1764660210.1018] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22653 uid=0 result="success" Dec 2 02:23:30 localhost NetworkManager[5967]: [1764660210.1098] device (eth1): carrier: link connected Dec 2 02:23:30 localhost NetworkManager[5967]: [1764660210.1288] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22662 uid=0 result="success" Dec 2 02:23:30 localhost ipv6_wait_tentative[22674]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state Dec 2 02:23:31 localhost ipv6_wait_tentative[22679]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state Dec 2 02:23:32 localhost NetworkManager[5967]: [1764660212.2007] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22688 uid=0 result="success" Dec 2 02:23:32 localhost ovs-vsctl[22703]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1 Dec 2 02:23:32 localhost kernel: device eth1 entered promiscuous mode Dec 2 02:23:32 localhost NetworkManager[5967]: [1764660212.2654] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22711 uid=0 result="success" Dec 2 02:23:32 localhost ifup[22712]: You are using 'ifup' script provided by 
'network-scripts', which are now deprecated. Dec 2 02:23:32 localhost ifup[22713]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:32 localhost ifup[22714]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 2 02:23:32 localhost NetworkManager[5967]: [1764660212.2908] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22720 uid=0 result="success" Dec 2 02:23:32 localhost NetworkManager[5967]: [1764660212.3309] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22730 uid=0 result="success" Dec 2 02:23:32 localhost ifup[22731]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:32 localhost ifup[22732]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:32 localhost ifup[22733]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 2 02:23:32 localhost NetworkManager[5967]: [1764660212.3610] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22739 uid=0 result="success" Dec 2 02:23:32 localhost ovs-vsctl[22742]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal Dec 2 02:23:32 localhost kernel: device vlan20 entered promiscuous mode Dec 2 02:23:32 localhost NetworkManager[5967]: [1764660212.3987] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/6) Dec 2 02:23:32 localhost systemd-udevd[22744]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 02:23:32 localhost NetworkManager[5967]: [1764660212.4222] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22753 uid=0 result="success" Dec 2 02:23:32 localhost NetworkManager[5967]: [1764660212.4410] device (vlan20): carrier: link connected Dec 2 02:23:35 localhost NetworkManager[5967]: [1764660215.4908] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22782 uid=0 result="success" Dec 2 02:23:35 localhost NetworkManager[5967]: [1764660215.5369] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22797 uid=0 result="success" Dec 2 02:23:35 localhost NetworkManager[5967]: [1764660215.5967] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22818 uid=0 result="success" Dec 2 02:23:35 localhost ifup[22819]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:35 localhost ifup[22820]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:35 localhost ifup[22821]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 2 02:23:35 localhost NetworkManager[5967]: [1764660215.6284] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22827 uid=0 result="success" Dec 2 02:23:35 localhost ovs-vsctl[22830]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal Dec 2 02:23:35 localhost kernel: device vlan22 entered promiscuous mode Dec 2 02:23:35 localhost NetworkManager[5967]: [1764660215.7001] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/7) Dec 2 02:23:35 localhost systemd-udevd[22833]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 02:23:35 localhost NetworkManager[5967]: [1764660215.7261] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22842 uid=0 result="success" Dec 2 02:23:35 localhost NetworkManager[5967]: [1764660215.7466] device (vlan22): carrier: link connected Dec 2 02:23:38 localhost NetworkManager[5967]: [1764660218.8005] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22872 uid=0 result="success" Dec 2 02:23:38 localhost NetworkManager[5967]: [1764660218.8498] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22887 uid=0 result="success" Dec 2 02:23:38 localhost NetworkManager[5967]: [1764660218.9192] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22908 uid=0 result="success" Dec 2 02:23:38 localhost ifup[22909]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:38 localhost ifup[22910]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:38 localhost ifup[22911]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 2 02:23:38 localhost NetworkManager[5967]: [1764660218.9504] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22917 uid=0 result="success" Dec 2 02:23:38 localhost ovs-vsctl[22920]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal Dec 2 02:23:38 localhost kernel: device vlan44 entered promiscuous mode Dec 2 02:23:38 localhost systemd-udevd[22922]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 02:23:38 localhost NetworkManager[5967]: [1764660218.9892] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/8) Dec 2 02:23:39 localhost NetworkManager[5967]: [1764660219.0147] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22932 uid=0 result="success" Dec 2 02:23:39 localhost NetworkManager[5967]: [1764660219.0374] device (vlan44): carrier: link connected Dec 2 02:23:40 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Dec 2 02:23:42 localhost NetworkManager[5967]: [1764660222.0903] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22962 uid=0 result="success" Dec 2 02:23:42 localhost NetworkManager[5967]: [1764660222.1372] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22977 uid=0 result="success" Dec 2 02:23:42 localhost NetworkManager[5967]: [1764660222.1962] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22998 uid=0 result="success" Dec 2 02:23:42 localhost ifup[22999]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:42 localhost ifup[23000]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:42 localhost ifup[23001]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 2 02:23:42 localhost NetworkManager[5967]: [1764660222.2310] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23007 uid=0 result="success" Dec 2 02:23:42 localhost ovs-vsctl[23010]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal Dec 2 02:23:42 localhost kernel: device vlan23 entered promiscuous mode Dec 2 02:23:42 localhost NetworkManager[5967]: [1764660222.2728] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/9) Dec 2 02:23:42 localhost systemd-udevd[23012]: Network interface NamePolicy= disabled on kernel command line. Dec 2 02:23:42 localhost NetworkManager[5967]: [1764660222.2999] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23022 uid=0 result="success" Dec 2 02:23:42 localhost NetworkManager[5967]: [1764660222.3207] device (vlan23): carrier: link connected Dec 2 02:23:45 localhost NetworkManager[5967]: [1764660225.3846] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23052 uid=0 result="success" Dec 2 02:23:45 localhost NetworkManager[5967]: [1764660225.4283] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23067 uid=0 result="success" Dec 2 02:23:45 localhost NetworkManager[5967]: [1764660225.4959] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23088 uid=0 result="success" Dec 2 02:23:45 localhost ifup[23089]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:45 localhost ifup[23090]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:45 localhost ifup[23091]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 2 02:23:45 localhost NetworkManager[5967]: [1764660225.5387] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23097 uid=0 result="success" Dec 2 02:23:45 localhost ovs-vsctl[23100]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal Dec 2 02:23:45 localhost kernel: device vlan21 entered promiscuous mode Dec 2 02:23:45 localhost NetworkManager[5967]: [1764660225.5820] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/10) Dec 2 02:23:45 localhost systemd-udevd[23103]: Network interface NamePolicy= disabled on kernel command line. Dec 2 02:23:45 localhost NetworkManager[5967]: [1764660225.6116] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23112 uid=0 result="success" Dec 2 02:23:45 localhost NetworkManager[5967]: [1764660225.6328] device (vlan21): carrier: link connected Dec 2 02:23:48 localhost NetworkManager[5967]: [1764660228.7275] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23142 uid=0 result="success" Dec 2 02:23:48 localhost NetworkManager[5967]: [1764660228.7766] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23157 uid=0 result="success" Dec 2 02:23:48 localhost NetworkManager[5967]: [1764660228.8379] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23178 uid=0 result="success" Dec 2 02:23:48 localhost ifup[23179]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:48 localhost ifup[23180]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:48 localhost ifup[23181]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 2 02:23:48 localhost NetworkManager[5967]: [1764660228.8732] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23187 uid=0 result="success" Dec 2 02:23:48 localhost ovs-vsctl[23190]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal Dec 2 02:23:48 localhost NetworkManager[5967]: [1764660228.9761] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23197 uid=0 result="success" Dec 2 02:23:50 localhost NetworkManager[5967]: [1764660230.0380] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23224 uid=0 result="success" Dec 2 02:23:50 localhost NetworkManager[5967]: [1764660230.0834] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23239 uid=0 result="success" Dec 2 02:23:50 localhost NetworkManager[5967]: [1764660230.1464] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23260 uid=0 result="success" Dec 2 02:23:50 localhost ifup[23261]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:50 localhost ifup[23262]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:50 localhost ifup[23263]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 2 02:23:50 localhost NetworkManager[5967]: [1764660230.1763] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23269 uid=0 result="success" Dec 2 02:23:50 localhost ovs-vsctl[23272]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal Dec 2 02:23:50 localhost NetworkManager[5967]: [1764660230.2343] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23279 uid=0 result="success" Dec 2 02:23:51 localhost NetworkManager[5967]: [1764660231.3019] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23307 uid=0 result="success" Dec 2 02:23:51 localhost NetworkManager[5967]: [1764660231.3462] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23322 uid=0 result="success" Dec 2 02:23:51 localhost NetworkManager[5967]: [1764660231.4064] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23343 uid=0 result="success" Dec 2 02:23:51 localhost ifup[23344]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:51 localhost ifup[23345]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:51 localhost ifup[23346]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 2 02:23:51 localhost NetworkManager[5967]: [1764660231.4371] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23352 uid=0 result="success" Dec 2 02:23:51 localhost ovs-vsctl[23355]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal Dec 2 02:23:51 localhost NetworkManager[5967]: [1764660231.5266] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23362 uid=0 result="success" Dec 2 02:23:52 localhost NetworkManager[5967]: [1764660232.5928] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23390 uid=0 result="success" Dec 2 02:23:52 localhost NetworkManager[5967]: [1764660232.6431] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23405 uid=0 result="success" Dec 2 02:23:52 localhost NetworkManager[5967]: [1764660232.6991] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23426 uid=0 result="success" Dec 2 02:23:52 localhost ifup[23427]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:52 localhost ifup[23428]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:52 localhost ifup[23429]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 2 02:23:52 localhost NetworkManager[5967]: [1764660232.7289] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23435 uid=0 result="success" Dec 2 02:23:52 localhost ovs-vsctl[23438]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal Dec 2 02:23:52 localhost NetworkManager[5967]: [1764660232.7834] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23445 uid=0 result="success" Dec 2 02:23:53 localhost NetworkManager[5967]: [1764660233.8401] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23473 uid=0 result="success" Dec 2 02:23:53 localhost NetworkManager[5967]: [1764660233.8874] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23488 uid=0 result="success" Dec 2 02:23:53 localhost NetworkManager[5967]: [1764660233.9519] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23509 uid=0 result="success" Dec 2 02:23:53 localhost ifup[23510]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:53 localhost ifup[23511]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:53 localhost ifup[23512]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 2 02:23:53 localhost NetworkManager[5967]: [1764660233.9851] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23518 uid=0 result="success" Dec 2 02:23:54 localhost ovs-vsctl[23521]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal Dec 2 02:23:54 localhost NetworkManager[5967]: [1764660234.0766] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23528 uid=0 result="success" Dec 2 02:23:55 localhost NetworkManager[5967]: [1764660235.1348] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23556 uid=0 result="success" Dec 2 02:23:55 localhost NetworkManager[5967]: [1764660235.1819] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23571 uid=0 result="success" Dec 2 02:24:18 localhost sshd[23590]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:24:19 localhost python3[23605]: ansible-ansible.legacy.command Invoked with _raw_params=ip a#012ping -c 2 -W 2 192.168.122.10#012ping -c 2 -W 2 192.168.122.11#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-00000000001b-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:24:24 localhost python3[23624]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 02:24:24 localhost python3[23640]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 02:24:26 localhost python3[23654]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 02:24:26 localhost python3[23670]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 02:24:27 localhost python3[23684]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname
Dec 2 02:24:27 localhost python3[23699]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005541914.novalocal"#012hostname_str_array=(${hostname//./ })#012echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-000000000022-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:24:28 localhost python3[23719]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)#012hostnamectl hostname "$hostname.localdomain"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-000000000023-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:24:28 localhost systemd[1]: Starting Hostname Service...
Dec 2 02:24:28 localhost systemd[1]: Started Hostname Service.
Dec 2 02:24:28 localhost systemd-hostnamed[23723]: Hostname set to (static)
Dec 2 02:24:28 localhost NetworkManager[5967]: [1764660268.9065] hostname: static hostname changed from "np0005541914.novalocal" to "np0005541914.localdomain"
Dec 2 02:24:28 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 2 02:24:28 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 2 02:24:29 localhost systemd[1]: session-10.scope: Deactivated successfully.
Dec 2 02:24:29 localhost systemd[1]: session-10.scope: Consumed 1min 47.687s CPU time.
Dec 2 02:24:29 localhost systemd-logind[760]: Session 10 logged out. Waiting for processes to exit.
Dec 2 02:24:29 localhost systemd-logind[760]: Removed session 10.
Dec 2 02:24:33 localhost sshd[23734]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:24:33 localhost systemd-logind[760]: New session 11 of user zuul.
Dec 2 02:24:33 localhost systemd[1]: Started Session 11 of User zuul.
Dec 2 02:24:33 localhost python3[23751]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Dec 2 02:24:34 localhost systemd-logind[760]: Session 11 logged out. Waiting for processes to exit.
Dec 2 02:24:34 localhost systemd[1]: session-11.scope: Deactivated successfully.
Dec 2 02:24:34 localhost systemd-logind[760]: Removed session 11.
Dec 2 02:24:38 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 2 02:24:58 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 2 02:25:28 localhost sshd[23755]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:25:28 localhost systemd-logind[760]: New session 12 of user zuul.
Dec 2 02:25:28 localhost systemd[1]: Started Session 12 of User zuul.
Dec 2 02:25:28 localhost python3[23774]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:25:32 localhost systemd[1]: Reloading.
Dec 2 02:25:32 localhost systemd-rc-local-generator[23819]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:25:32 localhost systemd-sysv-generator[23822]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:25:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:25:32 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 2 02:25:32 localhost systemd[1]: Reloading.
Dec 2 02:25:32 localhost systemd-rc-local-generator[23855]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:25:32 localhost systemd-sysv-generator[23858]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:25:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:25:33 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 2 02:25:33 localhost systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 2 02:25:33 localhost systemd[1]: Reloading.
Dec 2 02:25:33 localhost systemd-sysv-generator[23899]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:25:33 localhost systemd-rc-local-generator[23896]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:25:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:25:33 localhost systemd[1]: Listening on LVM2 poll daemon socket.
Dec 2 02:25:33 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 02:25:33 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 02:25:33 localhost systemd[1]: Reloading.
Dec 2 02:25:33 localhost systemd-sysv-generator[23959]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility.
Dec 2 02:25:33 localhost systemd-rc-local-generator[23956]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:25:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:25:33 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 02:25:33 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 02:25:34 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 02:25:34 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 02:25:34 localhost systemd[1]: run-r35502efc4f0b46f59e35d5704b3f2b6e.service: Deactivated successfully.
Dec 2 02:25:34 localhost systemd[1]: run-red15ff76a146406ba84677f30b2b9e6b.service: Deactivated successfully.
Dec 2 02:26:11 localhost sshd[24546]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:26:34 localhost systemd[1]: session-12.scope: Deactivated successfully.
Dec 2 02:26:34 localhost systemd[1]: session-12.scope: Consumed 4.658s CPU time.
Dec 2 02:26:34 localhost systemd-logind[760]: Session 12 logged out. Waiting for processes to exit.
Dec 2 02:26:34 localhost systemd-logind[760]: Removed session 12.
Dec 2 02:27:35 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:28:10 localhost sshd[24727]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:30:03 localhost sshd[24729]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:32:03 localhost sshd[24732]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:34:11 localhost sshd[24736]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:36:13 localhost sshd[24740]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:38:13 localhost sshd[24743]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:40:15 localhost sshd[24747]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:40:45 localhost sshd[24749]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:41:46 localhost sshd[24751]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:41:57 localhost sshd[24753]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:42:13 localhost sshd[24754]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:42:20 localhost sshd[24756]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:42:20 localhost systemd-logind[760]: New session 13 of user zuul.
Dec 2 02:42:20 localhost systemd[1]: Started Session 13 of User zuul.
Dec 2 02:42:21 localhost python3[24804]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 02:42:23 localhost python3[24891]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:42:26 localhost python3[24908]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:42:26 localhost python3[24924]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:42:27 localhost kernel: loop: module loaded
Dec 2 02:42:27 localhost kernel: loop3: detected capacity change from 0 to 14680064
Dec 2 02:42:27 localhost python3[24949]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:42:27 localhost lvm[24952]: PV /dev/loop3 not used.
Dec 2 02:42:27 localhost lvm[24954]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 2 02:42:27 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 2 02:42:27 localhost lvm[24963]: 1 logical volume(s) in volume group "ceph_vg0" now active
Dec 2 02:42:27 localhost systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 2 02:42:28 localhost python3[25011]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:42:28 localhost python3[25054]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764661348.0403972-53936-81282343706678/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:42:29 localhost python3[25084]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:42:30 localhost systemd[1]: Reloading.
Dec 2 02:42:30 localhost systemd-rc-local-generator[25114]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:42:30 localhost systemd-sysv-generator[25117]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:42:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:42:30 localhost systemd[1]: Starting Ceph OSD losetup...
Dec 2 02:42:30 localhost bash[25125]: /dev/loop3: [64516]:8399529 (/var/lib/ceph-osd-0.img)
Dec 2 02:42:31 localhost systemd[1]: Finished Ceph OSD losetup.
Dec 2 02:42:31 localhost lvm[25127]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 2 02:42:31 localhost lvm[25127]: VG ceph_vg0 finished
Dec 2 02:42:31 localhost python3[25144]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:42:34 localhost python3[25161]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:42:35 localhost python3[25177]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:42:35 localhost kernel: loop4: detected capacity change from 0 to 14680064
Dec 2 02:42:36 localhost python3[25199]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:42:36 localhost lvm[25202]: PV /dev/loop4 not used.
Dec 2 02:42:36 localhost lvm[25204]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 2 02:42:36 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec 2 02:42:36 localhost lvm[25213]: 1 logical volume(s) in volume group "ceph_vg1" now active
Dec 2 02:42:36 localhost lvm[25215]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 2 02:42:36 localhost lvm[25215]: VG ceph_vg1 finished
Dec 2 02:42:36 localhost systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec 2 02:42:36 localhost python3[25264]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:42:37 localhost python3[25307]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764661356.5437486-54106-3564664512598/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:42:37 localhost python3[25337]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:42:37 localhost systemd[1]: Reloading.
Dec 2 02:42:38 localhost systemd-rc-local-generator[25364]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:42:38 localhost systemd-sysv-generator[25371]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:42:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:42:38 localhost systemd[1]: Starting Ceph OSD losetup...
Dec 2 02:42:38 localhost bash[25379]: /dev/loop4: [64516]:8606979 (/var/lib/ceph-osd-1.img)
Dec 2 02:42:38 localhost systemd[1]: Finished Ceph OSD losetup.
Dec 2 02:42:38 localhost lvm[25380]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 2 02:42:38 localhost lvm[25380]: VG ceph_vg1 finished
Dec 2 02:42:45 localhost sshd[25381]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:42:46 localhost sshd[25412]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:42:46 localhost python3[25428]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 02:42:48 localhost python3[25448]: ansible-hostname Invoked with name=np0005541914.localdomain use=None
Dec 2 02:42:48 localhost systemd[1]: Starting Hostname Service...
Dec 2 02:42:48 localhost systemd[1]: Started Hostname Service.
Dec 2 02:42:55 localhost python3[25471]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Dec 2 02:42:56 localhost python3[25519]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.1a22a8zctmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:42:56 localhost python3[25549]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.1a22a8zctmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:42:57 localhost python3[25565]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.1a22a8zctmphosts insertbefore=BOF block=192.168.122.106 np0005541912.localdomain np0005541912#012192.168.122.106 np0005541912.ctlplane.localdomain np0005541912.ctlplane#012192.168.122.107 np0005541913.localdomain np0005541913#012192.168.122.107 np0005541913.ctlplane.localdomain np0005541913.ctlplane#012192.168.122.108 np0005541914.localdomain np0005541914#012192.168.122.108 np0005541914.ctlplane.localdomain np0005541914.ctlplane#012192.168.122.103 np0005541909.localdomain np0005541909#012192.168.122.103 np0005541909.ctlplane.localdomain np0005541909.ctlplane#012192.168.122.104 np0005541910.localdomain np0005541910#012192.168.122.104 np0005541910.ctlplane.localdomain np0005541910.ctlplane#012192.168.122.105 np0005541911.localdomain np0005541911#012192.168.122.105 np0005541911.ctlplane.localdomain np0005541911.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:42:57 localhost python3[25581]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.1a22a8zctmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:42:58 localhost python3[25598]: ansible-file Invoked with path=/tmp/ansible.1a22a8zctmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:43:00 localhost python3[25614]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:43:01 localhost python3[25632]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:43:05 localhost python3[25681]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:43:06 localhost python3[25726]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764661385.0833302-55052-279488438052816/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:43:07 localhost python3[25756]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:43:08 localhost python3[25774]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 02:43:08 localhost chronyd[766]: chronyd exiting
Dec 2 02:43:08 localhost systemd[1]: Stopping NTP client/server...
Dec 2 02:43:08 localhost systemd[1]: chronyd.service: Deactivated successfully.
Dec 2 02:43:08 localhost systemd[1]: Stopped NTP client/server.
Dec 2 02:43:08 localhost systemd[1]: chronyd.service: Consumed 129ms CPU time, read 1.9M from disk, written 0B to disk.
Dec 2 02:43:08 localhost systemd[1]: Starting NTP client/server...
Dec 2 02:43:08 localhost chronyd[25782]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 2 02:43:08 localhost chronyd[25782]: Frequency -30.274 +/- 0.056 ppm read from /var/lib/chrony/drift
Dec 2 02:43:08 localhost chronyd[25782]: Loaded seccomp filter (level 2)
Dec 2 02:43:08 localhost systemd[1]: Started NTP client/server.
Dec 2 02:43:08 localhost python3[25831]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:43:09 localhost python3[25874]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764661388.6881104-55196-171937319089839/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:43:09 localhost python3[25904]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:43:09 localhost systemd[1]: Reloading.
Dec 2 02:43:10 localhost systemd-rc-local-generator[25930]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:43:10 localhost systemd-sysv-generator[25933]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:43:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:43:10 localhost systemd[1]: Reloading.
Dec 2 02:43:10 localhost systemd-rc-local-generator[25969]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:43:10 localhost systemd-sysv-generator[25974]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:43:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:43:10 localhost systemd[1]: Starting chronyd online sources service...
Dec 2 02:43:10 localhost chronyc[25980]: 200 OK
Dec 2 02:43:10 localhost systemd[1]: chrony-online.service: Deactivated successfully.
Dec 2 02:43:10 localhost systemd[1]: Finished chronyd online sources service.
Dec 2 02:43:11 localhost python3[25996]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:43:11 localhost chronyd[25782]: System clock was stepped by -0.000000 seconds
Dec 2 02:43:11 localhost python3[26013]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:43:12 localhost chronyd[25782]: Selected source 51.222.12.92 (pool.ntp.org)
Dec 2 02:43:18 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 2 02:43:22 localhost python3[26032]: ansible-timezone Invoked with name=UTC hwclock=None
Dec 2 02:43:22 localhost systemd[1]: Starting Time & Date Service...
Dec 2 02:43:22 localhost systemd[1]: Started Time & Date Service.
Dec 2 02:43:22 localhost sshd[26053]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:43:23 localhost python3[26052]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 02:43:23 localhost chronyd[25782]: chronyd exiting
Dec 2 02:43:23 localhost systemd[1]: Stopping NTP client/server...
Dec 2 02:43:23 localhost systemd[1]: chronyd.service: Deactivated successfully.
Dec 2 02:43:23 localhost systemd[1]: Stopped NTP client/server.
Dec 2 02:43:23 localhost systemd[1]: Starting NTP client/server...
Dec 2 02:43:23 localhost chronyd[26062]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 2 02:43:23 localhost chronyd[26062]: Frequency -30.274 +/- 0.061 ppm read from /var/lib/chrony/drift
Dec 2 02:43:23 localhost chronyd[26062]: Loaded seccomp filter (level 2)
Dec 2 02:43:23 localhost systemd[1]: Started NTP client/server.
Dec 2 02:43:27 localhost chronyd[26062]: Selected source 51.222.12.92 (pool.ntp.org)
Dec 2 02:43:36 localhost sshd[26064]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:43:52 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 2 02:44:09 localhost sshd[26261]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:44:43 localhost sshd[26263]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:44:54 localhost sshd[26265]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:20 localhost sshd[26267]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:29 localhost sshd[26268]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:29 localhost systemd-logind[760]: New session 14 of user ceph-admin.
Dec 2 02:45:29 localhost systemd[1]: Created slice User Slice of UID 1002.
Dec 2 02:45:29 localhost systemd[1]: Starting User Runtime Directory /run/user/1002...
Dec 2 02:45:29 localhost systemd[1]: Finished User Runtime Directory /run/user/1002.
Dec 2 02:45:29 localhost systemd[1]: Starting User Manager for UID 1002...
Dec 2 02:45:29 localhost sshd[26286]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:29 localhost systemd[26272]: Queued start job for default target Main User Target.
Dec 2 02:45:29 localhost systemd[26272]: Created slice User Application Slice.
Dec 2 02:45:29 localhost systemd[26272]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 2 02:45:29 localhost systemd[26272]: Started Daily Cleanup of User's Temporary Directories.
Dec 2 02:45:29 localhost systemd[26272]: Reached target Paths.
Dec 2 02:45:29 localhost systemd[26272]: Reached target Timers.
Dec 2 02:45:29 localhost systemd[26272]: Starting D-Bus User Message Bus Socket...
Dec 2 02:45:29 localhost systemd[26272]: Starting Create User's Volatile Files and Directories...
Dec 2 02:45:29 localhost systemd[26272]: Listening on D-Bus User Message Bus Socket.
Dec 2 02:45:29 localhost systemd[26272]: Reached target Sockets.
Dec 2 02:45:29 localhost systemd[26272]: Finished Create User's Volatile Files and Directories.
Dec 2 02:45:29 localhost systemd[26272]: Reached target Basic System.
Dec 2 02:45:29 localhost systemd[26272]: Reached target Main User Target.
Dec 2 02:45:29 localhost systemd[26272]: Startup finished in 122ms.
Dec 2 02:45:29 localhost systemd[1]: Started User Manager for UID 1002.
Dec 2 02:45:29 localhost systemd[1]: Started Session 14 of User ceph-admin.
Dec 2 02:45:29 localhost systemd-logind[760]: New session 16 of user ceph-admin.
Dec 2 02:45:29 localhost systemd[1]: Started Session 16 of User ceph-admin.
Dec 2 02:45:29 localhost sshd[26308]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:30 localhost systemd-logind[760]: New session 17 of user ceph-admin.
Dec 2 02:45:30 localhost systemd[1]: Started Session 17 of User ceph-admin.
Dec 2 02:45:30 localhost sshd[26327]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:30 localhost systemd-logind[760]: New session 18 of user ceph-admin.
Dec 2 02:45:30 localhost systemd[1]: Started Session 18 of User ceph-admin.
Dec 2 02:45:30 localhost sshd[26346]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:30 localhost systemd-logind[760]: New session 19 of user ceph-admin.
Dec 2 02:45:30 localhost systemd[1]: Started Session 19 of User ceph-admin.
Dec 2 02:45:31 localhost sshd[26365]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:31 localhost systemd-logind[760]: New session 20 of user ceph-admin.
Dec 2 02:45:31 localhost systemd[1]: Started Session 20 of User ceph-admin.
Dec 2 02:45:31 localhost sshd[26384]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:31 localhost systemd-logind[760]: New session 21 of user ceph-admin.
Dec 2 02:45:31 localhost systemd[1]: Started Session 21 of User ceph-admin.
Dec 2 02:45:31 localhost sshd[26403]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:31 localhost systemd-logind[760]: New session 22 of user ceph-admin.
Dec 2 02:45:31 localhost systemd[1]: Started Session 22 of User ceph-admin.
Dec 2 02:45:32 localhost sshd[26422]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:32 localhost systemd-logind[760]: New session 23 of user ceph-admin.
Dec 2 02:45:32 localhost systemd[1]: Started Session 23 of User ceph-admin.
Dec 2 02:45:32 localhost sshd[26441]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:32 localhost systemd-logind[760]: New session 24 of user ceph-admin.
Dec 2 02:45:32 localhost systemd[1]: Started Session 24 of User ceph-admin.
Dec 2 02:45:32 localhost sshd[26458]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:33 localhost systemd-logind[760]: New session 25 of user ceph-admin.
Dec 2 02:45:33 localhost systemd[1]: Started Session 25 of User ceph-admin.
Dec 2 02:45:33 localhost sshd[26477]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:33 localhost systemd-logind[760]: New session 26 of user ceph-admin.
Dec 2 02:45:33 localhost systemd[1]: Started Session 26 of User ceph-admin.
Dec 2 02:45:33 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:45:48 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:45:49 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:45:49 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:45:50 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:45:50 localhost systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 26696 (sysctl)
Dec 2 02:45:50 localhost systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 2 02:45:50 localhost systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 2 02:45:50 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:45:51 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:45:55 localhost kernel: VFS: idmapped mount is not enabled.
Dec 2 02:46:11 localhost sshd[26932]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:46:15 localhost podman[26837]:
Dec 2 02:46:15 localhost podman[26837]: 2025-12-02 07:46:15.70425502 +0000 UTC m=+24.066758952 container create 4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_mendeleev, GIT_BRANCH=main, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, build-date=2025-11-26T19:44:28Z, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, release=1763362218, io.openshift.tags=rhceph ceph)
Dec 2 02:46:15 localhost systemd[1]: Created slice Slice /machine.
Dec 2 02:46:15 localhost systemd[1]: Started libpod-conmon-4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24.scope.
Dec 2 02:46:15 localhost podman[26837]: 2025-12-02 07:45:51.68218483 +0000 UTC m=+0.044688782 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:46:15 localhost systemd[1]: Started libcrun container.
Dec 2 02:46:15 localhost podman[26837]: 2025-12-02 07:46:15.836357446 +0000 UTC m=+24.198861408 container init 4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_mendeleev, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_BRANCH=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.buildah.version=1.41.4, name=rhceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 2 02:46:15 localhost podman[26837]: 2025-12-02 07:46:15.846225969 +0000 UTC m=+24.208729931 container start 4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_mendeleev, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, ceph=True, name=rhceph, RELEASE=main, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 2 02:46:15 localhost podman[26837]: 2025-12-02 07:46:15.846864748 +0000 UTC m=+24.209368750 container attach 4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_mendeleev, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, version=7, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True)
Dec 2 02:46:15 localhost eager_mendeleev[26940]: 167 167
Dec 2 02:46:15 localhost systemd[1]: libpod-4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24.scope: Deactivated successfully.
Dec 2 02:46:15 localhost podman[26837]: 2025-12-02 07:46:15.850174399 +0000 UTC m=+24.212678401 container died 4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_mendeleev, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, name=rhceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, version=7, io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 2 02:46:15 localhost podman[26945]: 2025-12-02 07:46:15.940818196 +0000 UTC m=+0.078235097 container remove 4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_mendeleev, name=rhceph, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, RELEASE=main, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7)
Dec 2 02:46:15 localhost systemd[1]: libpod-conmon-4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24.scope: Deactivated successfully.
Dec 2 02:46:16 localhost podman[27026]:
Dec 2 02:46:16 localhost podman[27026]: 2025-12-02 07:46:16.143849605 +0000 UTC m=+0.044504705 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:46:16 localhost systemd[1]: var-lib-containers-storage-overlay-5552358c14b5098670e608db9adf2e42b0f1c9fab4f8eb8fb57ce98f077e09c9-merged.mount: Deactivated successfully.
Dec 2 02:46:17 localhost sshd[27041]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:46:18 localhost sshd[27133]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:46:19 localhost podman[27026]: 2025-12-02 07:46:19.685965999 +0000 UTC m=+3.586621079 container create 6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_cartwright, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, RELEASE=main, version=7, distribution-scope=public)
Dec 2 02:46:19 localhost systemd[1]: Started libpod-conmon-6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71.scope.
Dec 2 02:46:19 localhost systemd[1]: Started libcrun container.
Dec 2 02:46:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f786a354fdf3b857b21088ea03c97b77e80eba87315cef388051689a36f867/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 2 02:46:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f786a354fdf3b857b21088ea03c97b77e80eba87315cef388051689a36f867/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 2 02:46:20 localhost podman[27026]: 2025-12-02 07:46:20.022610311 +0000 UTC m=+3.923265411 container init 6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_cartwright, CEPH_POINT_RELEASE=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, name=rhceph, ceph=True, vendor=Red Hat, Inc., distribution-scope=public, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, architecture=x86_64, GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4)
Dec 2 02:46:20 localhost podman[27026]: 2025-12-02 07:46:20.033615158 +0000 UTC m=+3.934270258 container start 6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_cartwright, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, distribution-scope=public, name=rhceph, ceph=True, release=1763362218, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_BRANCH=main, RELEASE=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7)
Dec 2 02:46:20 localhost podman[27026]: 2025-12-02 07:46:20.033892487 +0000 UTC m=+3.934547607 container attach 6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_cartwright, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 02:46:20 localhost admiring_cartwright[27138]: [
Dec 2 02:46:20 localhost admiring_cartwright[27138]: {
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "available": false,
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "ceph_device": false,
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "device_id": "QEMU_DVD-ROM_QM00001",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "lsm_data": {},
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "lvs": [],
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "path": "/dev/sr0",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "rejected_reasons": [
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "Has a FileSystem",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "Insufficient space (<5GB)"
Dec 2 02:46:20 localhost admiring_cartwright[27138]: ],
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "sys_api": {
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "actuators": null,
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "device_nodes": "sr0",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "human_readable_size": "482.00 KB",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "id_bus": "ata",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "model": "QEMU DVD-ROM",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "nr_requests": "2",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "partitions": {},
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "path": "/dev/sr0",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "removable": "1",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "rev": "2.5+",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "ro": "0",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "rotational": "1",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "sas_address": "",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "sas_device_handle": "",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "scheduler_mode": "mq-deadline",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "sectors": 0,
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "sectorsize": "2048",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "size": 493568.0,
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "support_discard": "0",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "type": "disk",
Dec 2 02:46:20 localhost admiring_cartwright[27138]: "vendor": "QEMU"
Dec 2 02:46:20 localhost admiring_cartwright[27138]: }
Dec 2 02:46:20 localhost admiring_cartwright[27138]: }
Dec 2 02:46:20 localhost admiring_cartwright[27138]: ]
Dec 2 02:46:20 localhost systemd[1]: libpod-6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71.scope: Deactivated successfully.
Dec 2 02:46:20 localhost podman[27026]: 2025-12-02 07:46:20.83843907 +0000 UTC m=+4.739094220 container died 6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_cartwright, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, release=1763362218, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public)
Dec 2 02:46:20 localhost systemd[1]: tmp-crun.6pY4V0.mount: Deactivated successfully.
Dec 2 02:46:20 localhost systemd[1]: var-lib-containers-storage-overlay-e2f786a354fdf3b857b21088ea03c97b77e80eba87315cef388051689a36f867-merged.mount: Deactivated successfully.
Dec 2 02:46:20 localhost podman[28523]: 2025-12-02 07:46:20.915303804 +0000 UTC m=+0.066409985 container remove 6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_cartwright, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, name=rhceph, release=1763362218, ceph=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public)
Dec 2 02:46:20 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:46:20 localhost systemd[1]: libpod-conmon-6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71.scope: Deactivated successfully.
Dec 2 02:46:21 localhost systemd[1]: systemd-coredump.socket: Deactivated successfully.
Dec 2 02:46:21 localhost systemd[1]: Closed Process Core Dump Socket.
Dec 2 02:46:21 localhost systemd[1]: Stopping Process Core Dump Socket...
Dec 2 02:46:21 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 2 02:46:21 localhost systemd[1]: Reloading.
Dec 2 02:46:21 localhost systemd-sysv-generator[28610]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:46:21 localhost systemd-rc-local-generator[28607]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:46:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:46:21 localhost systemd[1]: Reloading.
Dec 2 02:46:21 localhost systemd-rc-local-generator[28643]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:46:21 localhost systemd-sysv-generator[28649]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:46:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:46:49 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:46:49 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:46:49 localhost podman[28726]:
Dec 2 02:46:49 localhost podman[28726]: 2025-12-02 07:46:49.434098717 +0000 UTC m=+0.064215993 container create 8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_chaplygin, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, RELEASE=main)
Dec 2 02:46:49 localhost systemd[1]: Started libpod-conmon-8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d.scope.
Dec 2 02:46:49 localhost podman[28726]: 2025-12-02 07:46:49.402854499 +0000 UTC m=+0.032971775 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:46:49 localhost systemd[1]: Started libcrun container.
Dec 2 02:46:49 localhost podman[28726]: 2025-12-02 07:46:49.518629024 +0000 UTC m=+0.148746300 container init 8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_chaplygin, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, architecture=x86_64, release=1763362218, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 02:46:49 localhost podman[28726]: 2025-12-02 07:46:49.529091651 +0000 UTC m=+0.159208917 container start 8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_chaplygin, GIT_CLEAN=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-type=git, distribution-scope=public, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc.)
Dec 2 02:46:49 localhost podman[28726]: 2025-12-02 07:46:49.529378522 +0000 UTC m=+0.159495828 container attach 8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_chaplygin, architecture=x86_64, maintainer=Guillaume Abrioux , RELEASE=main, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, release=1763362218)
Dec 2 02:46:49 localhost modest_chaplygin[28742]: 167 167
Dec 2 02:46:49 localhost systemd[1]: libpod-8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d.scope: Deactivated successfully.
Dec 2 02:46:49 localhost podman[28726]: 2025-12-02 07:46:49.535348896 +0000 UTC m=+0.165466152 container died 8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_chaplygin, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-type=git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, distribution-scope=public, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True)
Dec 2 02:46:49 localhost podman[28747]: 2025-12-02 07:46:49.622648869 +0000 UTC m=+0.075483814 container remove 8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_chaplygin, name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.buildah.version=1.41.4, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.tags=rhceph ceph)
Dec 2 02:46:49 localhost systemd[1]: libpod-conmon-8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d.scope: Deactivated successfully.
Dec 2 02:46:49 localhost systemd[1]: Reloading.
Dec 2 02:46:49 localhost systemd-sysv-generator[28793]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:46:49 localhost systemd-rc-local-generator[28789]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:46:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:46:49 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:46:49 localhost systemd[1]: Reloading.
Dec 2 02:46:50 localhost systemd-rc-local-generator[28826]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:46:50 localhost systemd-sysv-generator[28829]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:46:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:46:50 localhost systemd[1]: Reached target All Ceph clusters and services.
Dec 2 02:46:50 localhost systemd[1]: Reloading.
Dec 2 02:46:50 localhost systemd-rc-local-generator[28865]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:46:50 localhost systemd-sysv-generator[28870]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:46:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:46:50 localhost systemd[1]: Reached target Ceph cluster c7c8e171-a193-56fb-95fa-8879fcfa7074.
Dec 2 02:46:50 localhost systemd[1]: Reloading.
Dec 2 02:46:50 localhost systemd-rc-local-generator[28904]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:46:50 localhost systemd-sysv-generator[28910]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:46:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:46:50 localhost systemd[1]: Reloading.
Dec 2 02:46:50 localhost systemd-rc-local-generator[28945]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:46:50 localhost systemd-sysv-generator[28950]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:46:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:46:50 localhost systemd[1]: Created slice Slice /system/ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074.
Dec 2 02:46:50 localhost systemd[1]: Reached target System Time Set.
Dec 2 02:46:50 localhost systemd[1]: Reached target System Time Synchronized.
Dec 2 02:46:50 localhost systemd[1]: Starting Ceph crash.np0005541914 for c7c8e171-a193-56fb-95fa-8879fcfa7074...
Dec 2 02:46:50 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:46:51 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:46:51 localhost podman[29009]: Dec 2 02:46:51 localhost podman[29009]: 2025-12-02 07:46:51.23193845 +0000 UTC m=+0.079346294 container create 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, version=7, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218) Dec 2 02:46:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e6bb70e7fa91d1c0ac445661cb691ca616e5dd0f18f242c2c9791f48b514dd/merged/etc/ceph/ceph.client.crash.np0005541914.keyring supports timestamps until 2038 (0x7fffffff) Dec 2 02:46:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e6bb70e7fa91d1c0ac445661cb691ca616e5dd0f18f242c2c9791f48b514dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 02:46:51 localhost podman[29009]: 2025-12-02 07:46:51.202846016 
+0000 UTC m=+0.050253890 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:46:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e6bb70e7fa91d1c0ac445661cb691ca616e5dd0f18f242c2c9791f48b514dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 02:46:51 localhost podman[29009]: 2025-12-02 07:46:51.326136613 +0000 UTC m=+0.173544447 container init 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vendor=Red Hat, Inc., release=1763362218, com.redhat.component=rhceph-container, RELEASE=main, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_BRANCH=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 02:46:51 localhost podman[29009]: 2025-12-02 07:46:51.337818628 +0000 UTC m=+0.185226472 container start 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, 
GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, release=1763362218, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z) Dec 2 02:46:51 localhost bash[29009]: 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c Dec 2 02:46:51 localhost systemd[1]: Started Ceph crash.np0005541914 for c7c8e171-a193-56fb-95fa-8879fcfa7074. 
Dec 2 02:46:51 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 2 02:46:51 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.534+0000 7f2c7fba3640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 2 02:46:51 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.534+0000 7f2c7fba3640 -1 AuthRegistry(0x7f2c780680d0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 2 02:46:51 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.535+0000 7f2c7fba3640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 2 02:46:51 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.535+0000 7f2c7fba3640 -1 AuthRegistry(0x7f2c7fba2000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 2 02:46:51 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.543+0000 7f2c7d918640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 2 02:46:51 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.544+0000 7f2c7d117640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 2 02:46:51 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.545+0000 7f2c7e119640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 2 02:46:51 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.545+0000 7f2c7fba3640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 2 02:46:51 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 2 02:46:51 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 2 02:46:52 localhost systemd[1]: tmp-crun.e5LWQ3.mount: Deactivated successfully.
Dec 2 02:46:54 localhost podman[29108]:
Dec 2 02:46:54 localhost podman[29108]: 2025-12-02 07:46:54.92827539 +0000 UTC m=+0.078469281 container create eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, GIT_CLEAN=True, CEPH_POINT_RELEASE=, name=rhceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, version=7, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and
supported base image., distribution-scope=public, maintainer=Guillaume Abrioux ) Dec 2 02:46:54 localhost systemd[1]: Started libpod-conmon-eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d.scope. Dec 2 02:46:54 localhost systemd[1]: Started libcrun container. Dec 2 02:46:54 localhost podman[29108]: 2025-12-02 07:46:54.896100015 +0000 UTC m=+0.046293926 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:46:55 localhost podman[29108]: 2025-12-02 07:46:55.008222806 +0000 UTC m=+0.158416677 container init eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, RELEASE=main, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 02:46:55 localhost systemd[1]: tmp-crun.jEh5WT.mount: Deactivated successfully. 
Dec 2 02:46:55 localhost podman[29108]: 2025-12-02 07:46:55.02113899 +0000 UTC m=+0.171332871 container start eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.buildah.version=1.41.4, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=1763362218, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, name=rhceph, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 02:46:55 localhost elastic_grothendieck[29123]: 167 167 Dec 2 02:46:55 localhost systemd[1]: libpod-eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d.scope: Deactivated successfully. 
Dec 2 02:46:55 localhost podman[29108]: 2025-12-02 07:46:55.02165352 +0000 UTC m=+0.171847391 container attach eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, version=7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, release=1763362218, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 02:46:55 localhost podman[29108]: 2025-12-02 07:46:55.034861305 +0000 UTC m=+0.185055196 container died eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, architecture=x86_64, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , release=1763362218, GIT_CLEAN=True, 
RELEASE=main, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph) Dec 2 02:46:55 localhost podman[29128]: 2025-12-02 07:46:55.119288256 +0000 UTC m=+0.082302629 container remove eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, name=rhceph, vcs-type=git, architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, 
io.k8s.description=Red Hat Ceph Storage 7) Dec 2 02:46:55 localhost systemd[1]: libpod-conmon-eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d.scope: Deactivated successfully. Dec 2 02:46:55 localhost podman[29148]: Dec 2 02:46:55 localhost podman[29148]: 2025-12-02 07:46:55.354487926 +0000 UTC m=+0.075634270 container create 1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_poitras, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 02:46:55 localhost systemd[1]: Started libpod-conmon-1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09.scope. Dec 2 02:46:55 localhost systemd[1]: Started libcrun container. 
Dec 2 02:46:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef207e59e192e6959d3447acef19d3d8276fb37b06a73030ebcf12024972096c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 2 02:46:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef207e59e192e6959d3447acef19d3d8276fb37b06a73030ebcf12024972096c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 2 02:46:55 localhost podman[29148]: 2025-12-02 07:46:55.330333884 +0000 UTC m=+0.051480248 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:46:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef207e59e192e6959d3447acef19d3d8276fb37b06a73030ebcf12024972096c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 2 02:46:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef207e59e192e6959d3447acef19d3d8276fb37b06a73030ebcf12024972096c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 2 02:46:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef207e59e192e6959d3447acef19d3d8276fb37b06a73030ebcf12024972096c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 2 02:46:55 localhost podman[29148]: 2025-12-02 07:46:55.477496702 +0000 UTC m=+0.198643056 container init 1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_poitras, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, CEPH_POINT_RELEASE=, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-type=git,
GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, distribution-scope=public, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, name=rhceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 02:46:55 localhost podman[29148]: 2025-12-02 07:46:55.488183899 +0000 UTC m=+0.209330223 container start 1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_poitras, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, architecture=x86_64, distribution-scope=public, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True) Dec 2 02:46:55 localhost podman[29148]: 2025-12-02 07:46:55.488402457 +0000 UTC m=+0.209548821 container attach 1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_poitras, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., architecture=x86_64, release=1763362218, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 02:46:55 localhost systemd[1]: tmp-crun.7J9zDH.mount: Deactivated successfully. Dec 2 02:46:55 localhost systemd[1]: var-lib-containers-storage-overlay-f206391c52d49a4b86af2ec2006fd5c884ec2aecfa3ca60806c0f52919ef512c-merged.mount: Deactivated successfully. 
Dec 2 02:46:55 localhost funny_poitras[29164]: --> passed data devices: 0 physical, 2 LVM
Dec 2 02:46:55 localhost funny_poitras[29164]: --> relative data size: 1.0
Dec 2 02:46:56 localhost funny_poitras[29164]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 2 02:46:56 localhost funny_poitras[29164]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 27399dc0-3412-47da-81e0-87f9f4a96daf
Dec 2 02:46:56 localhost funny_poitras[29164]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 2 02:46:56 localhost lvm[29218]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 2 02:46:56 localhost lvm[29218]: VG ceph_vg0 finished
Dec 2 02:46:56 localhost funny_poitras[29164]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 2 02:46:56 localhost funny_poitras[29164]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 2 02:46:56 localhost funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 2 02:46:56 localhost funny_poitras[29164]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 2 02:46:56 localhost funny_poitras[29164]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 2 02:46:57 localhost funny_poitras[29164]: stderr: got monmap epoch 3
Dec 2 02:46:57 localhost funny_poitras[29164]: --> Creating keyring file for osd.1
Dec 2 02:46:57 localhost funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 2 02:46:57 localhost funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 2 02:46:57 localhost funny_poitras[29164]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 27399dc0-3412-47da-81e0-87f9f4a96daf --setuser ceph --setgroup ceph
Dec 2 02:46:59 localhost funny_poitras[29164]: stderr: 2025-12-02T07:46:57.201+0000 7f3895057a80 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 2 02:46:59 localhost funny_poitras[29164]: stderr: 2025-12-02T07:46:57.201+0000 7f3895057a80 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 2 02:46:59 localhost funny_poitras[29164]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 2 02:46:59 localhost funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 2 02:46:59 localhost funny_poitras[29164]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 2 02:46:59 localhost funny_poitras[29164]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 2 02:46:59 localhost funny_poitras[29164]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 2 02:46:59 localhost funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 2 02:46:59 localhost funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 2 02:46:59 localhost funny_poitras[29164]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 2 02:46:59 localhost funny_poitras[29164]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 2 02:46:59 localhost funny_poitras[29164]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 2 02:46:59 localhost funny_poitras[29164]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e70bab01-7143-4db1-8b99-c97ca4b22476
Dec 2 02:47:00 localhost lvm[30163]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 2 02:47:00 localhost lvm[30163]: VG ceph_vg1 finished
Dec 2 02:47:00 localhost funny_poitras[29164]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 2 02:47:00 localhost funny_poitras[29164]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-4
Dec 2 02:47:00 localhost funny_poitras[29164]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec 2 02:47:00 localhost funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 2 02:47:00 localhost funny_poitras[29164]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Dec 2 02:47:00 localhost funny_poitras[29164]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-4/activate.monmap
Dec 2 02:47:00 localhost funny_poitras[29164]: stderr: got monmap epoch 3
Dec 2 02:47:00 localhost funny_poitras[29164]: --> Creating keyring file for osd.4
Dec 2 02:47:00 localhost funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/keyring
Dec 2 02:47:00 localhost funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/
Dec 2 02:47:00 localhost funny_poitras[29164]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid e70bab01-7143-4db1-8b99-c97ca4b22476 --setuser ceph --setgroup ceph
Dec 2 02:47:03 localhost funny_poitras[29164]: stderr: 2025-12-02T07:47:00.987+0000 7f606666ba80 -1 bluestore(/var/lib/ceph/osd/ceph-4//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 2 02:47:03 localhost funny_poitras[29164]: stderr: 2025-12-02T07:47:00.987+0000 7f606666ba80 -1 bluestore(/var/lib/ceph/osd/ceph-4/) _read_fsid unparsable uuid
Dec 2 02:47:03 localhost funny_poitras[29164]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec 2 02:47:03 localhost funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 2 02:47:03 localhost funny_poitras[29164]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-4 --no-mon-config
Dec 2 02:47:03 localhost funny_poitras[29164]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Dec 2 02:47:03 localhost funny_poitras[29164]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block
Dec 2 02:47:03 localhost funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 2 02:47:03 localhost funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 2 02:47:03 localhost funny_poitras[29164]: --> ceph-volume lvm activate successful for osd ID: 4
Dec 2 02:47:03 localhost funny_poitras[29164]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec 2 02:47:03 localhost systemd[1]: libpod-1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09.scope: Deactivated successfully.
Dec 2 02:47:03 localhost systemd[1]: libpod-1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09.scope: Consumed 3.766s CPU time.
Dec 2 02:47:03 localhost podman[31077]: 2025-12-02 07:47:03.632537383 +0000 UTC m=+0.036706172 container died 1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_poitras, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1763362218, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 2 02:47:03 localhost systemd[1]: var-lib-containers-storage-overlay-ef207e59e192e6959d3447acef19d3d8276fb37b06a73030ebcf12024972096c-merged.mount: Deactivated successfully.
Dec 2 02:47:03 localhost podman[31077]: 2025-12-02 07:47:03.669898799 +0000 UTC m=+0.074067568 container remove 1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_poitras, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, RELEASE=main, maintainer=Guillaume Abrioux , ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, description=Red Hat Ceph Storage 7)
Dec 2 02:47:03 localhost systemd[1]: libpod-conmon-1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09.scope: Deactivated successfully.
Dec 2 02:47:04 localhost podman[31163]:
Dec 2 02:47:04 localhost podman[31163]: 2025-12-02 07:47:04.36271235 +0000 UTC m=+0.062436685 container create f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, ceph=True, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, release=1763362218, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git)
Dec 2 02:47:04 localhost systemd[1]: Started libpod-conmon-f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b.scope.
Dec 2 02:47:04 localhost podman[31163]: 2025-12-02 07:47:04.334622094 +0000 UTC m=+0.034346429 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:47:04 localhost systemd[1]: Started libcrun container.
Dec 2 02:47:04 localhost podman[31163]: 2025-12-02 07:47:04.472957738 +0000 UTC m=+0.172682073 container init f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux , GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64)
Dec 2 02:47:04 localhost podman[31163]: 2025-12-02 07:47:04.482138096 +0000 UTC m=+0.181862431 container start f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, RELEASE=main, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 2 02:47:04 localhost podman[31163]: 2025-12-02 07:47:04.482715138 +0000 UTC m=+0.182439523 container attach f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=1763362218, name=rhceph, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, RELEASE=main)
Dec 2 02:47:04 localhost practical_hypatia[31178]: 167 167
Dec 2 02:47:04 localhost systemd[1]: libpod-f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b.scope: Deactivated successfully.
Dec 2 02:47:04 localhost podman[31163]: 2025-12-02 07:47:04.486128251 +0000 UTC m=+0.185852596 container died f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 02:47:04 localhost podman[31183]: 2025-12-02 07:47:04.58048728 +0000 UTC m=+0.081111223 container remove f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.tags=rhceph ceph, version=7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4)
Dec 2 02:47:04 localhost systemd[1]: libpod-conmon-f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b.scope: Deactivated successfully.
Dec 2 02:47:04 localhost systemd[1]: var-lib-containers-storage-overlay-3254d7836790191d0343d08b2b1e49e4862bbe42c4dbd69830baa428da8f1264-merged.mount: Deactivated successfully.
Dec 2 02:47:04 localhost podman[31202]:
Dec 2 02:47:04 localhost podman[31202]: 2025-12-02 07:47:04.777195319 +0000 UTC m=+0.078321695 container create a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_cohen, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, name=rhceph, GIT_BRANCH=main, ceph=True, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, RELEASE=main)
Dec 2 02:47:04 localhost systemd[1]: Started libpod-conmon-a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8.scope.
Dec 2 02:47:04 localhost systemd[1]: Started libcrun container.
Dec 2 02:47:04 localhost podman[31202]: 2025-12-02 07:47:04.746116567 +0000 UTC m=+0.047242913 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:47:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2ab4ecc6ff019beb442334811f3160869f27cc786f5c706351cf560b46594c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2ab4ecc6ff019beb442334811f3160869f27cc786f5c706351cf560b46594c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2ab4ecc6ff019beb442334811f3160869f27cc786f5c706351cf560b46594c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:04 localhost podman[31202]: 2025-12-02 07:47:04.891403432 +0000 UTC m=+0.192529778 container init a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_cohen, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=)
Dec 2 02:47:04 localhost podman[31202]: 2025-12-02 07:47:04.902308827 +0000 UTC m=+0.203435173 container start a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_cohen, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=rhceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.buildah.version=1.41.4)
Dec 2 02:47:04 localhost podman[31202]: 2025-12-02 07:47:04.902961062 +0000 UTC m=+0.204087468 container attach a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_cohen, distribution-scope=public, version=7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-type=git, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph)
Dec 2 02:47:05 localhost nice_cohen[31218]: {
Dec 2 02:47:05 localhost nice_cohen[31218]: "1": [
Dec 2 02:47:05 localhost nice_cohen[31218]: {
Dec 2 02:47:05 localhost nice_cohen[31218]: "devices": [
Dec 2 02:47:05 localhost nice_cohen[31218]: "/dev/loop3"
Dec 2 02:47:05 localhost nice_cohen[31218]: ],
Dec 2 02:47:05 localhost nice_cohen[31218]: "lv_name": "ceph_lv0",
Dec 2 02:47:05 localhost nice_cohen[31218]: "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 2 02:47:05 localhost nice_cohen[31218]: "lv_size": "7511998464",
Dec 2 02:47:05 localhost nice_cohen[31218]: "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=NWFjF3-2e4a-cjYp-Y2T1-lfu3-Zb16-4w56yf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c7c8e171-a193-56fb-95fa-8879fcfa7074,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=27399dc0-3412-47da-81e0-87f9f4a96daf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 2 02:47:05 localhost nice_cohen[31218]: "lv_uuid": "NWFjF3-2e4a-cjYp-Y2T1-lfu3-Zb16-4w56yf",
Dec 2 02:47:05 localhost nice_cohen[31218]: "name": "ceph_lv0",
Dec 2 02:47:05 localhost nice_cohen[31218]: "path": "/dev/ceph_vg0/ceph_lv0",
Dec 2 02:47:05 localhost nice_cohen[31218]: "tags": {
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.block_uuid": "NWFjF3-2e4a-cjYp-Y2T1-lfu3-Zb16-4w56yf",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.cephx_lockbox_secret": "",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.cluster_fsid": "c7c8e171-a193-56fb-95fa-8879fcfa7074",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.cluster_name": "ceph",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.crush_device_class": "",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.encrypted": "0",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.osd_fsid": "27399dc0-3412-47da-81e0-87f9f4a96daf",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.osd_id": "1",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.osdspec_affinity": "default_drive_group",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.type": "block",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.vdo": "0"
Dec 2 02:47:05 localhost nice_cohen[31218]: },
Dec 2 02:47:05 localhost nice_cohen[31218]: "type": "block",
Dec 2 02:47:05 localhost nice_cohen[31218]: "vg_name": "ceph_vg0"
Dec 2 02:47:05 localhost nice_cohen[31218]: }
Dec 2 02:47:05 localhost nice_cohen[31218]: ],
Dec 2 02:47:05 localhost nice_cohen[31218]: "4": [
Dec 2 02:47:05 localhost nice_cohen[31218]: {
Dec 2 02:47:05 localhost nice_cohen[31218]: "devices": [
Dec 2 02:47:05 localhost nice_cohen[31218]: "/dev/loop4"
Dec 2 02:47:05 localhost nice_cohen[31218]: ],
Dec 2 02:47:05 localhost nice_cohen[31218]: "lv_name": "ceph_lv1",
Dec 2 02:47:05 localhost nice_cohen[31218]: "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 2 02:47:05 localhost nice_cohen[31218]: "lv_size": "7511998464",
Dec 2 02:47:05 localhost nice_cohen[31218]: "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Sb75eh-ZyAN-pVXU-lBgz-dsu2-qa6Y-DlAUu4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c7c8e171-a193-56fb-95fa-8879fcfa7074,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e70bab01-7143-4db1-8b99-c97ca4b22476,ceph.osd_id=4,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 2 02:47:05 localhost nice_cohen[31218]: "lv_uuid": "Sb75eh-ZyAN-pVXU-lBgz-dsu2-qa6Y-DlAUu4",
Dec 2 02:47:05 localhost nice_cohen[31218]: "name": "ceph_lv1",
Dec 2 02:47:05 localhost nice_cohen[31218]: "path": "/dev/ceph_vg1/ceph_lv1",
Dec 2 02:47:05 localhost nice_cohen[31218]: "tags": {
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.block_uuid": "Sb75eh-ZyAN-pVXU-lBgz-dsu2-qa6Y-DlAUu4",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.cephx_lockbox_secret": "",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.cluster_fsid": "c7c8e171-a193-56fb-95fa-8879fcfa7074",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.cluster_name": "ceph",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.crush_device_class": "",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.encrypted": "0",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.osd_fsid": "e70bab01-7143-4db1-8b99-c97ca4b22476",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.osd_id": "4",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.osdspec_affinity": "default_drive_group",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.type": "block",
Dec 2 02:47:05 localhost nice_cohen[31218]: "ceph.vdo": "0"
Dec 2 02:47:05 localhost nice_cohen[31218]: },
Dec 2 02:47:05 localhost nice_cohen[31218]: "type": "block",
Dec 2 02:47:05 localhost nice_cohen[31218]: "vg_name": "ceph_vg1"
Dec 2 02:47:05 localhost nice_cohen[31218]: }
Dec 2 02:47:05 localhost nice_cohen[31218]: ]
Dec 2 02:47:05 localhost nice_cohen[31218]: }
Dec 2 02:47:05 localhost systemd[1]: libpod-a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8.scope: Deactivated successfully.
Dec 2 02:47:05 localhost podman[31202]: 2025-12-02 07:47:05.305945644 +0000 UTC m=+0.607072040 container died a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_cohen, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, architecture=x86_64, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, ceph=True, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=)
Dec 2 02:47:05 localhost podman[31227]: 2025-12-02 07:47:05.392092012 +0000 UTC m=+0.077464741 container remove a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_cohen, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, release=1763362218, name=rhceph, vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, version=7, io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True)
Dec 2 02:47:05 localhost systemd[1]: libpod-conmon-a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8.scope: Deactivated successfully.
Dec 2 02:47:05 localhost systemd[1]: var-lib-containers-storage-overlay-7d2ab4ecc6ff019beb442334811f3160869f27cc786f5c706351cf560b46594c-merged.mount: Deactivated successfully.
Dec 2 02:47:06 localhost podman[31313]:
Dec 2 02:47:06 localhost podman[31313]: 2025-12-02 07:47:06.108843757 +0000 UTC m=+0.057676130 container create 04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_jackson, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, name=rhceph, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 2 02:47:06 localhost systemd[1]: Started libpod-conmon-04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec.scope.
Dec 2 02:47:06 localhost systemd[1]: Started libcrun container.
Dec 2 02:47:06 localhost podman[31313]: 2025-12-02 07:47:06.174368581 +0000 UTC m=+0.123200924 container init 04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_jackson, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, RELEASE=main, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, release=1763362218)
Dec 2 02:47:06 localhost podman[31313]: 2025-12-02 07:47:06.080664368 +0000 UTC m=+0.029496771 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:47:06 localhost podman[31313]: 2025-12-02 07:47:06.184185524 +0000 UTC m=+0.133017927 container start 04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_jackson, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_BRANCH=main, distribution-scope=public, ceph=True, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, RELEASE=main)
Dec 2 02:47:06 localhost podman[31313]: 2025-12-02 07:47:06.184432734 +0000 UTC m=+0.133265077 container attach 04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_jackson, name=rhceph, architecture=x86_64, ceph=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, distribution-scope=public, com.redhat.component=rhceph-container, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 02:47:06 localhost romantic_jackson[31328]: 167 167
Dec 2 02:47:06 localhost systemd[1]: libpod-04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec.scope: Deactivated successfully.
Dec 2 02:47:06 localhost podman[31313]: 2025-12-02 07:47:06.187096448 +0000 UTC m=+0.135928831 container died 04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_jackson, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True)
Dec 2 02:47:06 localhost podman[31333]: 2025-12-02 07:47:06.275175141 +0000 UTC m=+0.078194570 container remove 04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_jackson, maintainer=Guillaume Abrioux , 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, ceph=True, release=1763362218) Dec 2 02:47:06 localhost systemd[1]: libpod-conmon-04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec.scope: Deactivated successfully. 
Dec 2 02:47:06 localhost podman[31361]: 2025-12-02 07:47:06.563341376 +0000 UTC m=+0.042680605 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:47:06 localhost podman[31361]:
Dec 2 02:47:06 localhost podman[31361]: 2025-12-02 07:47:06.59113859 +0000 UTC m=+0.070477809 container create f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test, build-date=2025-11-26T19:44:28Z, vcs-type=git, architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, distribution-scope=public, GIT_BRANCH=main, release=1763362218, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 02:47:06 localhost systemd[1]: Started libpod-conmon-f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949.scope.
Dec 2 02:47:06 localhost systemd[1]: Started libcrun container.
Dec 2 02:47:06 localhost systemd[1]: var-lib-containers-storage-overlay-1f5f03fd0defdf4be6f50a6bd2280c9157dd5cb009a445f1ea0fe590b44c5439-merged.mount: Deactivated successfully.
Dec 2 02:47:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b186727602b9ca9eee2285af88533dcdfac5b9e766d66dbbcf1a5912fdeeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b186727602b9ca9eee2285af88533dcdfac5b9e766d66dbbcf1a5912fdeeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b186727602b9ca9eee2285af88533dcdfac5b9e766d66dbbcf1a5912fdeeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b186727602b9ca9eee2285af88533dcdfac5b9e766d66dbbcf1a5912fdeeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b186727602b9ca9eee2285af88533dcdfac5b9e766d66dbbcf1a5912fdeeb/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:06 localhost podman[31361]: 2025-12-02 07:47:06.712317224 +0000 UTC m=+0.191656443 container init f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, release=1763362218, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, name=rhceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=)
Dec 2 02:47:06 localhost systemd[1]: tmp-crun.utJxGp.mount: Deactivated successfully.
Dec 2 02:47:06 localhost podman[31361]: 2025-12-02 07:47:06.724917326 +0000 UTC m=+0.204256555 container start f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, version=7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc.)
Dec 2 02:47:06 localhost podman[31361]: 2025-12-02 07:47:06.725224327 +0000 UTC m=+0.204563586 container attach f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_BRANCH=main, release=1763362218, com.redhat.component=rhceph-container, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4)
Dec 2 02:47:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test[31376]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 2 02:47:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test[31376]: [--no-systemd] [--no-tmpfs]
Dec 2 02:47:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test[31376]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 2 02:47:06 localhost systemd[1]: libpod-f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949.scope: Deactivated successfully.
Dec 2 02:47:06 localhost podman[31361]: 2025-12-02 07:47:06.945939873 +0000 UTC m=+0.425279152 container died f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, name=rhceph, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 2 02:47:07 localhost systemd[1]: var-lib-containers-storage-overlay-f54b186727602b9ca9eee2285af88533dcdfac5b9e766d66dbbcf1a5912fdeeb-merged.mount: Deactivated successfully.
Dec 2 02:47:07 localhost systemd-journald[619]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation.
Dec 2 02:47:07 localhost systemd-journald[619]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 2 02:47:07 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 2 02:47:07 localhost podman[31381]: 2025-12-02 07:47:07.04233564 +0000 UTC m=+0.086825726 container remove f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, release=1763362218, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_CLEAN=True)
Dec 2 02:47:07 localhost systemd[1]: libpod-conmon-f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949.scope: Deactivated successfully.
Dec 2 02:47:07 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 2 02:47:07 localhost systemd[1]: Reloading.
Dec 2 02:47:07 localhost systemd-sysv-generator[31444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:47:07 localhost systemd-rc-local-generator[31440]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:47:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:47:07 localhost systemd[1]: Reloading.
Dec 2 02:47:07 localhost systemd-rc-local-generator[31479]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:47:07 localhost systemd-sysv-generator[31484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:47:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:47:07 localhost systemd[1]: Starting Ceph osd.1 for c7c8e171-a193-56fb-95fa-8879fcfa7074...
Dec 2 02:47:08 localhost podman[31545]:
Dec 2 02:47:08 localhost podman[31545]: 2025-12-02 07:47:08.249178112 +0000 UTC m=+0.088086376 container create 2a6b46d3cff9fa20dc013978ce8dc0be0b1f7f1ba9e57b439649287a6dfde9fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.expose-services=, GIT_CLEAN=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vcs-type=git, architecture=x86_64, name=rhceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.tags=rhceph ceph)
Dec 2 02:47:08 localhost systemd[1]: tmp-crun.CkIe7G.mount: Deactivated successfully.
Dec 2 02:47:08 localhost podman[31545]: 2025-12-02 07:47:08.207672694 +0000 UTC m=+0.046580988 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:47:08 localhost systemd[1]: Started libcrun container.
Dec 2 02:47:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583ca6775c994fa7ed8a77ec4e549f4e241ffad956cfbe9948cd544cbd459f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583ca6775c994fa7ed8a77ec4e549f4e241ffad956cfbe9948cd544cbd459f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583ca6775c994fa7ed8a77ec4e549f4e241ffad956cfbe9948cd544cbd459f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583ca6775c994fa7ed8a77ec4e549f4e241ffad956cfbe9948cd544cbd459f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583ca6775c994fa7ed8a77ec4e549f4e241ffad956cfbe9948cd544cbd459f7/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:08 localhost podman[31545]: 2025-12-02 07:47:08.380111916 +0000 UTC m=+0.219020190 container init 2a6b46d3cff9fa20dc013978ce8dc0be0b1f7f1ba9e57b439649287a6dfde9fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate, ceph=True, release=1763362218, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 02:47:08 localhost podman[31545]: 2025-12-02 07:47:08.390776302 +0000 UTC m=+0.229684566 container start 2a6b46d3cff9fa20dc013978ce8dc0be0b1f7f1ba9e57b439649287a6dfde9fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, version=7, RELEASE=main, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 02:47:08 localhost podman[31545]: 2025-12-02 07:47:08.391069583 +0000 UTC m=+0.229977927 container attach 2a6b46d3cff9fa20dc013978ce8dc0be0b1f7f1ba9e57b439649287a6dfde9fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, distribution-scope=public)
Dec 2 02:47:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate[31560]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 2 02:47:09 localhost bash[31545]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 2 02:47:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate[31560]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 2 02:47:09 localhost bash[31545]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 2 02:47:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate[31560]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 2 02:47:09 localhost bash[31545]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 2 02:47:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate[31560]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 2 02:47:09 localhost bash[31545]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 2 02:47:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate[31560]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 2 02:47:09 localhost bash[31545]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 2 02:47:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate[31560]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 2 02:47:09 localhost bash[31545]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 2 02:47:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate[31560]: --> ceph-volume raw activate successful for osd ID: 1
Dec 2 02:47:09 localhost bash[31545]: --> ceph-volume raw activate successful for osd ID: 1
Dec 2 02:47:09 localhost systemd[1]: libpod-2a6b46d3cff9fa20dc013978ce8dc0be0b1f7f1ba9e57b439649287a6dfde9fc.scope: Deactivated successfully.
Dec 2 02:47:09 localhost podman[31545]: 2025-12-02 07:47:09.172410065 +0000 UTC m=+1.011318339 container died 2a6b46d3cff9fa20dc013978ce8dc0be0b1f7f1ba9e57b439649287a6dfde9fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate, ceph=True, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-type=git, name=rhceph, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 2 02:47:09 localhost systemd[1]: var-lib-containers-storage-overlay-0583ca6775c994fa7ed8a77ec4e549f4e241ffad956cfbe9948cd544cbd459f7-merged.mount: Deactivated successfully.
Dec 2 02:47:09 localhost podman[31691]: 2025-12-02 07:47:09.271592123 +0000 UTC m=+0.087496253 container remove 2a6b46d3cff9fa20dc013978ce8dc0be0b1f7f1ba9e57b439649287a6dfde9fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, version=7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-type=git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, maintainer=Guillaume Abrioux , distribution-scope=public)
Dec 2 02:47:09 localhost podman[31752]:
Dec 2 02:47:09 localhost podman[31752]: 2025-12-02 07:47:09.631864068 +0000 UTC m=+0.071376033 container create 3d64e5e3c63fd4353268c2b77cd98845bd8df4357249a1c3c35d00ad296d91be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, ceph=True, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, name=rhceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Dec 2 02:47:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eda494c8fcfc88d0d18776bddf0fd1b75b39c1a1b38c1572e6fd850e5a6fe5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:09 localhost podman[31752]: 2025-12-02 07:47:09.604812184 +0000 UTC m=+0.044324149 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:47:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eda494c8fcfc88d0d18776bddf0fd1b75b39c1a1b38c1572e6fd850e5a6fe5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eda494c8fcfc88d0d18776bddf0fd1b75b39c1a1b38c1572e6fd850e5a6fe5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eda494c8fcfc88d0d18776bddf0fd1b75b39c1a1b38c1572e6fd850e5a6fe5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eda494c8fcfc88d0d18776bddf0fd1b75b39c1a1b38c1572e6fd850e5a6fe5/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:09 localhost podman[31752]: 2025-12-02 07:47:09.754850883 +0000 UTC m=+0.194362858 container init 3d64e5e3c63fd4353268c2b77cd98845bd8df4357249a1c3c35d00ad296d91be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1, ceph=True, RELEASE=main, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., release=1763362218, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux )
Dec 2 02:47:09 localhost podman[31752]: 2025-12-02 07:47:09.7647591 +0000 UTC m=+0.204271065 container start 3d64e5e3c63fd4353268c2b77cd98845bd8df4357249a1c3c35d00ad296d91be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.k8s.display-name=Red
Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 02:47:09 localhost bash[31752]: 3d64e5e3c63fd4353268c2b77cd98845bd8df4357249a1c3c35d00ad296d91be Dec 2 02:47:09 localhost systemd[1]: Started Ceph osd.1 for c7c8e171-a193-56fb-95fa-8879fcfa7074. 
Dec 2 02:47:09 localhost ceph-osd[31770]: set uid:gid to 167:167 (ceph:ceph) Dec 2 02:47:09 localhost ceph-osd[31770]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2 Dec 2 02:47:09 localhost ceph-osd[31770]: pidfile_write: ignore empty --pid-file Dec 2 02:47:09 localhost ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block Dec 2 02:47:09 localhost ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument Dec 2 02:47:09 localhost ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:09 localhost ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 2 02:47:09 localhost ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block Dec 2 02:47:09 localhost ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument Dec 2 02:47:09 localhost ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:09 localhost ceph-osd[31770]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB Dec 2 02:47:09 localhost ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) close Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) close Dec 2 02:47:10 localhost systemd[1]: tmp-crun.AEtDqd.mount: Deactivated successfully. 
Dec 2 02:47:10 localhost ceph-osd[31770]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal Dec 2 02:47:10 localhost ceph-osd[31770]: load: jerasure load: lrc Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:10 localhost ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) close Dec 2 02:47:10 localhost podman[31861]: Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:10 localhost ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) close Dec 2 02:47:10 localhost podman[31861]: 2025-12-02 07:47:10.628483884 +0000 UTC m=+0.071623303 container create f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=funny_jepsen, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, GIT_BRANCH=main, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 02:47:10 localhost systemd[1]: Started libpod-conmon-f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026.scope. Dec 2 02:47:10 localhost systemd[1]: Started libcrun container. 
Dec 2 02:47:10 localhost podman[31861]: 2025-12-02 07:47:10.598010575 +0000 UTC m=+0.041149974 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:10 localhost podman[31861]: 2025-12-02 07:47:10.702458147 +0000 UTC m=+0.145597546 container init f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_jepsen, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, ceph=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, architecture=x86_64, com.redhat.component=rhceph-container, release=1763362218, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True) Dec 2 02:47:10 localhost funny_jepsen[31880]: 167 167 Dec 2 02:47:10 localhost systemd[1]: libpod-f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026.scope: Deactivated successfully. 
Dec 2 02:47:10 localhost podman[31861]: 2025-12-02 07:47:10.715108491 +0000 UTC m=+0.158247920 container start f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_jepsen, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, version=7, vcs-type=git) Dec 2 02:47:10 localhost podman[31861]: 2025-12-02 07:47:10.715420203 +0000 UTC m=+0.158559652 container attach f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_jepsen, distribution-scope=public, GIT_CLEAN=True, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, 
maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, name=rhceph, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_BRANCH=main, io.openshift.expose-services=) Dec 2 02:47:10 localhost podman[31861]: 2025-12-02 07:47:10.718601167 +0000 UTC m=+0.161740626 container died f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_jepsen, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vcs-type=git, ceph=True, GIT_BRANCH=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on 
RHEL 9) Dec 2 02:47:10 localhost podman[31885]: 2025-12-02 07:47:10.809906717 +0000 UTC m=+0.081005099 container remove f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_jepsen, GIT_CLEAN=True, ceph=True, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, release=1763362218, vcs-type=git, version=7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, RELEASE=main) Dec 2 02:47:10 localhost systemd[1]: libpod-conmon-f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026.scope: Deactivated successfully. 
Dec 2 02:47:10 localhost ceph-osd[31770]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second Dec 2 02:47:10 localhost ceph-osd[31770]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196 Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:10 localhost ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:10 localhost ceph-osd[31770]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB Dec 2 02:47:10 localhost ceph-osd[31770]: bluefs mount Dec 2 02:47:10 localhost ceph-osd[31770]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Dec 2 02:47:10 localhost ceph-osd[31770]: bluefs mount shared_bdev_used = 0 Dec 2 02:47:10 localhost ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 
db.slow,7136398540 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: RocksDB version: 7.9.2 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Git sha 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: DB SUMMARY Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: DB Session ID: LVMBL2HNWNG9X0KVIREE Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: CURRENT file: CURRENT Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: IDENTITY file: IDENTITY Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.error_if_exists: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.create_if_missing: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.paranoid_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.env: 0x56102cd99c70 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.fs: LegacyFileSystem Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.info_log: 0x56102cf1cbe0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_file_opening_threads: 16 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.statistics: (nil) Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.use_fsync: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: 
Options.max_log_file_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.log_file_time_to_roll: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.keep_log_file_num: 1000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.recycle_log_file_num: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.allow_fallocate: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.allow_mmap_reads: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.allow_mmap_writes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.use_direct_reads: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.create_missing_column_families: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.db_log_dir: Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.wal_dir: db.wal Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_cache_numshardbits: 6 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.manifest_preallocation_size: 4194304 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.is_fd_close_on_exec: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.advise_random_on_open: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.db_write_buffer_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_manager: 0x56102bf68140 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.access_hint_on_compaction_start: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.random_access_max_buffer_size: 
1048576 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.use_adaptive_mutex: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.rate_limiter: (nil) Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.wal_recovery_mode: 2 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_thread_tracking: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_pipelined_write: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.unordered_write: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.row_cache: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.wal_filter: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.allow_ingest_behind: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.two_write_queues: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.manual_wal_flush: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.wal_compression: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.atomic_flush: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.persist_stats_to_disk: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.log_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: 
Options.file_checksum_gen_factory: Unknown Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.best_efforts_recovery: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.allow_data_in_errors: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.db_host_id: __hostname__ Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enforce_single_del_contracts: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_background_jobs: 4 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_background_compactions: -1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_subcompactions: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.writable_file_max_buffer_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.delayed_write_rate : 16777216 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_total_wal_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.stats_dump_period_sec: 600 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.stats_persist_period_sec: 600 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_open_files: -1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bytes_per_sync: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.wal_bytes_per_sync: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_readahead_size: 2097152 Dec 2 
02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_background_flushes: -1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Compression algorithms supported: Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: #011kZSTD supported: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: #011kXpressCompression supported: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: #011kBZip2Compression supported: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: #011kLZ4Compression supported: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: #011kZlibCompression supported: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: #011kSnappyCompression supported: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: DMutex implementation: pthread_mutex_t Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: 
Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cda0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf56850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: 
Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: 
rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Dec 2 02:47:10 localhost 
ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cda0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf56850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 
num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 
localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: 
Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 
localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cda0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf56850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 
4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost 
ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: 
Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: 
Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost 
ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 
02:47:10 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cda0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf56850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 
localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 
02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cda0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf56850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cda0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf56850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cda0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf56850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cfc0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf562d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression:
LZ4 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost 
ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: 
Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: 
Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0 
Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cfc0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf562d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 
0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 
02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: 
-1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: 
Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cfc0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 
pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf562d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 
02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 
localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31770]: 
rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: 
Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5566] Recovered from 
manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a5e44943-061a-4b6e-9a59-1f91106222d9 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661630928004, "job": 1, "event": "recovery_started", "wal_files": [31]} Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: 
[db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661630928256, "job": 1, "event": "recovery_finished"} Dec 2 02:47:10 localhost ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Dec 2 02:47:10 localhost ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025 Dec 2 02:47:10 localhost ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240 Dec 2 02:47:10 localhost ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3 Dec 2 02:47:10 localhost ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000 Dec 2 02:47:10 localhost ceph-osd[31770]: freelist init Dec 2 02:47:10 localhost ceph-osd[31770]: freelist _read_cfg Dec 2 02:47:10 localhost ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07 Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work Dec 2 02:47:10 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete Dec 2 02:47:10 localhost ceph-osd[31770]: bluefs umount Dec 2 02:47:10 localhost ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) close Dec 2 02:47:11 localhost podman[32106]: Dec 2 02:47:11 localhost podman[32106]: 2025-12-02 07:47:11.149420433 +0000 
UTC m=+0.069975189 container create 24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.openshift.expose-services=, GIT_BRANCH=main, version=7, ceph=True, com.redhat.component=rhceph-container, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 02:47:11 localhost ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block Dec 2 02:47:11 localhost ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument Dec 2 02:47:11 localhost ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:11 localhost ceph-osd[31770]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB Dec 2 02:47:11 localhost ceph-osd[31770]: bluefs 
mount Dec 2 02:47:11 localhost ceph-osd[31770]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Dec 2 02:47:11 localhost ceph-osd[31770]: bluefs mount shared_bdev_used = 4718592 Dec 2 02:47:11 localhost ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: RocksDB version: 7.9.2 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Git sha 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: DB SUMMARY Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: DB Session ID: LVMBL2HNWNG9X0KVIREF Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: CURRENT file: CURRENT Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: IDENTITY file: IDENTITY Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.error_if_exists: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.create_if_missing: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.paranoid_checks: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.env: 0x56102c00a700 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.fs: LegacyFileSystem Dec 2 02:47:11 localhost ceph-osd[31770]: 
rocksdb: Options.info_log: 0x56102cf488e0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_file_opening_threads: 16 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.statistics: (nil) Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.use_fsync: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_log_file_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.log_file_time_to_roll: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.keep_log_file_num: 1000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.recycle_log_file_num: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.allow_fallocate: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.allow_mmap_reads: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.allow_mmap_writes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.use_direct_reads: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.create_missing_column_families: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.db_log_dir: Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.wal_dir: db.wal Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_cache_numshardbits: 6 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.manifest_preallocation_size: 4194304 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.is_fd_close_on_exec: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.advise_random_on_open: 1 Dec 2 02:47:11 localhost 
ceph-osd[31770]: rocksdb: Options.db_write_buffer_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_manager: 0x56102bf695e0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.access_hint_on_compaction_start: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.random_access_max_buffer_size: 1048576 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.use_adaptive_mutex: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.rate_limiter: (nil) Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.wal_recovery_mode: 2 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_thread_tracking: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_pipelined_write: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.unordered_write: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.row_cache: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.wal_filter: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.allow_ingest_behind: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.two_write_queues: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.manual_wal_flush: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.wal_compression: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.atomic_flush: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
Options.avoid_unnecessary_blocking_io: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.persist_stats_to_disk: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.log_readahead_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.file_checksum_gen_factory: Unknown Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.best_efforts_recovery: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.allow_data_in_errors: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.db_host_id: __hostname__ Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enforce_single_del_contracts: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_background_jobs: 4 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_background_compactions: -1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_subcompactions: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.writable_file_max_buffer_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.delayed_write_rate : 16777216 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_total_wal_size: 1073741824 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.stats_dump_period_sec: 600 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.stats_persist_period_sec: 600 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_open_files: -1 Dec 2 
02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bytes_per_sync: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.wal_bytes_per_sync: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_readahead_size: 2097152 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_background_flushes: -1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Compression algorithms supported: Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: #011kZSTD supported: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: #011kXpressCompression supported: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: #011kBZip2Compression supported: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: #011kLZ4Compression supported: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: #011kZlibCompression supported: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: #011kSnappyCompression supported: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: DMutex implementation: pthread_mutex_t Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Dec 2 02:47:11 localhost 
ceph-osd[31770]: rocksdb: Options.compaction_filter: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1d180)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf562d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:11 localhost systemd[1]: Started libpod-conmon-24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57.scope. 
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
Options.compression_opts.level: 32767 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:11 
localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:11 
localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1d180)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf562d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 
4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1d180)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf562d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1d180)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf562d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1d180)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf562d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1d180)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf562d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:11
localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:11 localhost ceph-osd[31770]: 
rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
Options.force_consistency_checks: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
Options.compaction_filter_factory: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1d180)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf562d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
Options.prefix_extractor: nullptr Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600 
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:11 localhost 
ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:11 localhost systemd[1]: Started libcrun container. 
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf48a60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf57610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 
0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 
02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: 
-1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
Options.min_blob_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf48a60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 
pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf57610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 
02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:11 
localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:11 localhost ceph-osd[31770]: 
rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
Options.force_consistency_checks: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.merge_operator: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_filter: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
Options.compaction_filter_factory: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf48a60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56102bf57610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression: LZ4 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
Options.prefix_extractor: nullptr Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.num_levels: 7 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_compaction_bytes: 1677721600 
Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:11 localhost 
ceph-osd[31770]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: 
[db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a5e44943-061a-4b6e-9a59-1f91106222d9 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661631209801, "job": 1, "event": "recovery_started", "wal_files": [31]} Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661631215833, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661631, "oldest_key_time": 0, "file_creation_time": 0, 
"slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5e44943-061a-4b6e-9a59-1f91106222d9", "db_session_id": "LVMBL2HNWNG9X0KVIREF", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661631220609, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661631, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5e44943-061a-4b6e-9a59-1f91106222d9", "db_session_id": "LVMBL2HNWNG9X0KVIREF", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Dec 2 02:47:11 localhost podman[32106]: 2025-12-02 07:47:11.121961403 +0000 UTC m=+0.042516149 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: EVENT_LOG_v1 {"time_micros": 
1764661631230161, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661631, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5e44943-061a-4b6e-9a59-1f91106222d9", "db_session_id": "LVMBL2HNWNG9X0KVIREF", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Dec 2 02:47:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857771b9da63fd80334a4d85644724539b2942fbe5375bd4441f8fb15562019f/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661631235584, "job": 1, 
"event": "recovery_finished"} Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Dec 2 02:47:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857771b9da63fd80334a4d85644724539b2942fbe5375bd4441f8fb15562019f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:11 localhost systemd[1]: tmp-crun.hvM3ZP.mount: Deactivated successfully. Dec 2 02:47:11 localhost systemd[1]: var-lib-containers-storage-overlay-24d99e87a9caef3e8a709065b423d27bec43ce946290715f03773f8cd58f18f2-merged.mount: Deactivated successfully. Dec 2 02:47:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857771b9da63fd80334a4d85644724539b2942fbe5375bd4441f8fb15562019f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857771b9da63fd80334a4d85644724539b2942fbe5375bd4441f8fb15562019f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857771b9da63fd80334a4d85644724539b2942fbe5375bd4441f8fb15562019f/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56102bfbe700 Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: DB pointer 0x56102ce79a00 Dec 2 02:47:11 localhost ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options 
compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Dec 2 02:47:11 localhost ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4 Dec 2 02:47:11 localhost ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 02:47:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 
0.00 1 0.006 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-0] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-1] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 460.80 MB usag
Dec 2 02:47:11 localhost podman[32106]:
2025-12-02 07:47:11.294620594 +0000 UTC m=+0.215175320 container init 24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.expose-services=, GIT_CLEAN=True, name=rhceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 02:47:11 localhost ceph-osd[31770]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Dec 2 02:47:11 localhost ceph-osd[31770]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Dec 2 02:47:11 localhost ceph-osd[31770]: _get_class not permitted to load lua Dec 2 02:47:11 localhost ceph-osd[31770]: _get_class not permitted to load sdk Dec 2 02:47:11 localhost ceph-osd[31770]: _get_class not permitted to load test_remote_reads Dec 2 02:47:11 localhost podman[32106]: 2025-12-02 07:47:11.304287091 +0000 UTC m=+0.224841837 container start 
24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test, distribution-scope=public, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, release=1763362218, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux ) Dec 2 02:47:11 localhost podman[32106]: 2025-12-02 07:47:11.30452686 +0000 UTC m=+0.225081606 container attach 24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, ceph=True, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-type=git, release=1763362218) Dec 2 02:47:11 localhost ceph-osd[31770]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients Dec 2 02:47:11 localhost ceph-osd[31770]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Dec 2 02:47:11 localhost ceph-osd[31770]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds Dec 2 02:47:11 localhost ceph-osd[31770]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Dec 2 02:47:11 localhost ceph-osd[31770]: osd.1 0 load_pgs Dec 2 02:47:11 localhost ceph-osd[31770]: osd.1 0 load_pgs opened 0 pgs Dec 2 02:47:11 localhost ceph-osd[31770]: osd.1 0 log_to_monitors true Dec 2 02:47:11 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1[31766]: 2025-12-02T07:47:11.306+0000 7f831d222a80 -1 osd.1 0 log_to_monitors true Dec 2 02:47:11 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test[32133]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Dec 2 02:47:11 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test[32133]: [--no-systemd] [--no-tmpfs] Dec 2 02:47:11 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test[32133]: ceph-volume activate: error: unrecognized arguments: --bad-option Dec 2 02:47:11 localhost 
systemd[1]: libpod-24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57.scope: Deactivated successfully. Dec 2 02:47:11 localhost podman[32106]: 2025-12-02 07:47:11.527111299 +0000 UTC m=+0.447666085 container died 24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-type=git, release=1763362218, description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, RELEASE=main) Dec 2 02:47:11 localhost systemd[1]: var-lib-containers-storage-overlay-857771b9da63fd80334a4d85644724539b2942fbe5375bd4441f8fb15562019f-merged.mount: Deactivated successfully. 
Dec 2 02:47:11 localhost podman[32341]: 2025-12-02 07:47:11.618049994 +0000 UTC m=+0.078954759 container remove 24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., release=1763362218, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, RELEASE=main, maintainer=Guillaume Abrioux ) Dec 2 02:47:11 localhost systemd[1]: libpod-conmon-24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57.scope: Deactivated successfully. Dec 2 02:47:11 localhost systemd[1]: Reloading. Dec 2 02:47:11 localhost systemd-sysv-generator[32399]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 02:47:11 localhost systemd-rc-local-generator[32394]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 02:47:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 02:47:12 localhost systemd[1]: Reloading. Dec 2 02:47:12 localhost systemd-rc-local-generator[32437]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 02:47:12 localhost systemd-sysv-generator[32442]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 02:47:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 02:47:12 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Dec 2 02:47:12 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Dec 2 02:47:12 localhost systemd[1]: Starting Ceph osd.4 for c7c8e171-a193-56fb-95fa-8879fcfa7074... 
Dec 2 02:47:12 localhost podman[32499]: Dec 2 02:47:12 localhost podman[32499]: 2025-12-02 07:47:12.764833904 +0000 UTC m=+0.075782406 container create a13d48fc38ccb1cdd3a3ab2c2dd62eca088408b0a5363814517e93ae7f73c789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, RELEASE=main, release=1763362218, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 02:47:12 localhost systemd[1]: Started libcrun container. 
Dec 2 02:47:12 localhost podman[32499]: 2025-12-02 07:47:12.733819375 +0000 UTC m=+0.044767897 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21c6f473da69728c129edfbf629f19d9847d1c96f622ccd62f7909840db2079/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21c6f473da69728c129edfbf629f19d9847d1c96f622ccd62f7909840db2079/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21c6f473da69728c129edfbf629f19d9847d1c96f622ccd62f7909840db2079/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21c6f473da69728c129edfbf629f19d9847d1c96f622ccd62f7909840db2079/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21c6f473da69728c129edfbf629f19d9847d1c96f622ccd62f7909840db2079/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:12 localhost podman[32499]: 2025-12-02 07:47:12.893888325 +0000 UTC m=+0.204836827 container init a13d48fc38ccb1cdd3a3ab2c2dd62eca088408b0a5363814517e93ae7f73c789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate, vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, 
vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, name=rhceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph) Dec 2 02:47:12 localhost podman[32499]: 2025-12-02 07:47:12.90324465 +0000 UTC m=+0.214193152 container start a13d48fc38ccb1cdd3a3ab2c2dd62eca088408b0a5363814517e93ae7f73c789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True) Dec 2 02:47:12 localhost podman[32499]: 2025-12-02 07:47:12.903474259 +0000 UTC m=+0.214422761 container attach a13d48fc38ccb1cdd3a3ab2c2dd62eca088408b0a5363814517e93ae7f73c789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, release=1763362218, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container) Dec 2 02:47:13 localhost ceph-osd[31770]: osd.1 0 done with init, starting boot process Dec 2 02:47:13 localhost ceph-osd[31770]: osd.1 0 start_boot Dec 2 02:47:13 localhost ceph-osd[31770]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1 Dec 2 02:47:13 localhost ceph-osd[31770]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Dec 2 02:47:13 localhost ceph-osd[31770]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Dec 2 02:47:13 
localhost ceph-osd[31770]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Dec 2 02:47:13 localhost ceph-osd[31770]: osd.1 0 bench count 12288000 bsize 4 KiB Dec 2 02:47:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate[32513]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Dec 2 02:47:13 localhost bash[32499]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Dec 2 02:47:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate[32513]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Dec 2 02:47:13 localhost bash[32499]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Dec 2 02:47:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate[32513]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Dec 2 02:47:13 localhost bash[32499]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Dec 2 02:47:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate[32513]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Dec 2 02:47:13 localhost bash[32499]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Dec 2 02:47:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate[32513]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block Dec 2 02:47:13 localhost bash[32499]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block Dec 2 02:47:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate[32513]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Dec 2 02:47:13 localhost bash[32499]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Dec 2 02:47:13 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate[32513]: --> ceph-volume raw activate successful for osd ID: 4 Dec 2 02:47:13 localhost bash[32499]: --> ceph-volume raw activate successful for osd ID: 4 Dec 2 02:47:13 localhost systemd[1]: libpod-a13d48fc38ccb1cdd3a3ab2c2dd62eca088408b0a5363814517e93ae7f73c789.scope: Deactivated successfully. Dec 2 02:47:13 localhost podman[32499]: 2025-12-02 07:47:13.576072781 +0000 UTC m=+0.887021283 container died a13d48fc38ccb1cdd3a3ab2c2dd62eca088408b0a5363814517e93ae7f73c789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_CLEAN=True, release=1763362218, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, version=7, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 02:47:13 localhost systemd[1]: tmp-crun.GIZjtC.mount: Deactivated successfully. 
Dec 2 02:47:13 localhost podman[32628]: 2025-12-02 07:47:13.69249282 +0000 UTC m=+0.105910850 container remove a13d48fc38ccb1cdd3a3ab2c2dd62eca088408b0a5363814517e93ae7f73c789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, release=1763362218, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, architecture=x86_64, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 02:47:13 localhost systemd[1]: var-lib-containers-storage-overlay-d21c6f473da69728c129edfbf629f19d9847d1c96f622ccd62f7909840db2079-merged.mount: Deactivated successfully. 
Dec 2 02:47:14 localhost podman[32689]: Dec 2 02:47:14 localhost podman[32689]: 2025-12-02 07:47:14.048377875 +0000 UTC m=+0.079477810 container create 83594567313a2481b3b1ef77fd5820fe0afdf611352f9cd399dd2edd595553b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4, RELEASE=main, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, release=1763362218, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_CLEAN=True, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, name=rhceph) Dec 2 02:47:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99d296e38dcf84f39ca144658150c830740a09750e397b5878edd0161b2f87a/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:14 localhost podman[32689]: 2025-12-02 07:47:14.019164296 +0000 UTC m=+0.050264301 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99d296e38dcf84f39ca144658150c830740a09750e397b5878edd0161b2f87a/merged/etc/ceph/ceph.conf supports 
timestamps until 2038 (0x7fffffff) Dec 2 02:47:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99d296e38dcf84f39ca144658150c830740a09750e397b5878edd0161b2f87a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99d296e38dcf84f39ca144658150c830740a09750e397b5878edd0161b2f87a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99d296e38dcf84f39ca144658150c830740a09750e397b5878edd0161b2f87a/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:14 localhost podman[32689]: 2025-12-02 07:47:14.17109636 +0000 UTC m=+0.202196295 container init 83594567313a2481b3b1ef77fd5820fe0afdf611352f9cd399dd2edd595553b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4, architecture=x86_64, release=1763362218, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , name=rhceph, GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
RELEASE=main, ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc.) Dec 2 02:47:14 localhost podman[32689]: 2025-12-02 07:47:14.199910243 +0000 UTC m=+0.231010178 container start 83594567313a2481b3b1ef77fd5820fe0afdf611352f9cd399dd2edd595553b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 02:47:14 localhost bash[32689]: 83594567313a2481b3b1ef77fd5820fe0afdf611352f9cd399dd2edd595553b5 Dec 2 02:47:14 localhost systemd[1]: Started Ceph osd.4 for c7c8e171-a193-56fb-95fa-8879fcfa7074. 
Dec 2 02:47:14 localhost ceph-osd[32707]: set uid:gid to 167:167 (ceph:ceph) Dec 2 02:47:14 localhost ceph-osd[32707]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2 Dec 2 02:47:14 localhost ceph-osd[32707]: pidfile_write: ignore empty --pid-file Dec 2 02:47:14 localhost ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Dec 2 02:47:14 localhost ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Dec 2 02:47:14 localhost ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:14 localhost ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 2 02:47:14 localhost ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Dec 2 02:47:14 localhost ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Dec 2 02:47:14 localhost ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:14 localhost ceph-osd[32707]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB Dec 2 02:47:14 localhost ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) close Dec 2 02:47:14 localhost ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) close Dec 2 02:47:14 localhost ceph-osd[32707]: starting osd.4 osd_data /var/lib/ceph/osd/ceph-4 /var/lib/ceph/osd/ceph-4/journal Dec 2 02:47:14 localhost ceph-osd[32707]: load: jerasure 
load: lrc Dec 2 02:47:14 localhost ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Dec 2 02:47:14 localhost ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Dec 2 02:47:14 localhost ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:14 localhost ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 2 02:47:14 localhost ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) close Dec 2 02:47:15 localhost ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Dec 2 02:47:15 localhost ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Dec 2 02:47:15 localhost ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:15 localhost ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 2 02:47:15 localhost ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) close Dec 2 02:47:15 localhost podman[32798]: Dec 2 02:47:15 localhost podman[32798]: 2025-12-02 07:47:15.127796508 +0000 UTC m=+0.072795839 container create 9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_snyder, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, ceph=True, architecture=x86_64, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=) Dec 2 02:47:15 localhost systemd[1]: Started libpod-conmon-9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710.scope. Dec 2 02:47:15 localhost podman[32798]: 2025-12-02 07:47:15.097251257 +0000 UTC m=+0.042250588 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:15 localhost systemd[1]: Started libcrun container. 
Dec 2 02:47:15 localhost podman[32798]: 2025-12-02 07:47:15.227518216 +0000 UTC m=+0.172517547 container init 9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_snyder, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_BRANCH=main, vcs-type=git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1763362218, build-date=2025-11-26T19:44:28Z, distribution-scope=public, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 02:47:15 localhost systemd[1]: tmp-crun.WJ4iBY.mount: Deactivated successfully. Dec 2 02:47:15 localhost hungry_snyder[32813]: 167 167 Dec 2 02:47:15 localhost systemd[1]: libpod-9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710.scope: Deactivated successfully. 
Dec 2 02:47:15 localhost podman[32798]: 2025-12-02 07:47:15.253394685 +0000 UTC m=+0.198394016 container start 9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_snyder, vendor=Red Hat, Inc., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, release=1763362218, build-date=2025-11-26T19:44:28Z, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_CLEAN=True, version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True) Dec 2 02:47:15 localhost podman[32798]: 2025-12-02 07:47:15.254383903 +0000 UTC m=+0.199383274 container attach 9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_snyder, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, version=7, release=1763362218, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, ceph=True) Dec 2 02:47:15 localhost podman[32798]: 2025-12-02 07:47:15.258008005 +0000 UTC m=+0.203007366 container died 9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_snyder, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 02:47:15 localhost ceph-osd[32707]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second Dec 2 02:47:15 localhost ceph-osd[32707]: osd.4:0.OSDShard using op scheduler mclock_scheduler, cutoff=196 Dec 2 02:47:15 localhost ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Dec 2 02:47:15 localhost ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Dec 2 02:47:15 localhost ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:15 localhost ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 2 02:47:15 localhost ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Dec 2 02:47:15 localhost ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Dec 2 02:47:15 localhost ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:15 localhost ceph-osd[32707]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB Dec 2 02:47:15 localhost ceph-osd[32707]: bluefs mount Dec 2 02:47:15 localhost ceph-osd[32707]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Dec 2 02:47:15 localhost ceph-osd[32707]: bluefs mount shared_bdev_used = 0 Dec 2 02:47:15 localhost ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) 
_prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: RocksDB version: 7.9.2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Git sha 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: DB SUMMARY Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: DB Session ID: WSVYTTET1R9PTPBU71LV Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: CURRENT file: CURRENT Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: IDENTITY file: IDENTITY Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.error_if_exists: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.create_if_missing: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_checks: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.env: 0x5620505dcc40 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.fs: LegacyFileSystem Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.info_log: 0x5620512e8740 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_file_opening_threads: 16 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.statistics: (nil) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.use_fsync: 0 Dec 2 
02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_log_file_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.log_file_time_to_roll: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.keep_log_file_num: 1000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.recycle_log_file_num: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.allow_fallocate: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.allow_mmap_reads: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.allow_mmap_writes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.use_direct_reads: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.create_missing_column_families: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.db_log_dir: Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.wal_dir: db.wal Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_cache_numshardbits: 6 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.manifest_preallocation_size: 4194304 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.is_fd_close_on_exec: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.advise_random_on_open: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.db_write_buffer_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_manager: 0x562050332140 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.access_hint_on_compaction_start: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: 
rocksdb: Options.random_access_max_buffer_size: 1048576 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.use_adaptive_mutex: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.rate_limiter: (nil) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.wal_recovery_mode: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_thread_tracking: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_pipelined_write: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.unordered_write: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.row_cache: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.wal_filter: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.allow_ingest_behind: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.two_write_queues: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.manual_wal_flush: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.wal_compression: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.atomic_flush: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.persist_stats_to_disk: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.log_readahead_size: 0 Dec 2 
02:47:15 localhost ceph-osd[32707]: rocksdb: Options.file_checksum_gen_factory: Unknown Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.best_efforts_recovery: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.allow_data_in_errors: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.db_host_id: __hostname__ Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enforce_single_del_contracts: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_background_jobs: 4 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_background_compactions: -1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_subcompactions: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.writable_file_max_buffer_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.delayed_write_rate : 16777216 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_total_wal_size: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.stats_dump_period_sec: 600 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.stats_persist_period_sec: 600 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_open_files: -1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bytes_per_sync: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.wal_bytes_per_sync: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.compaction_readahead_size: 2097152 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_background_flushes: -1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Compression algorithms supported: Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kZSTD supported: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kXpressCompression supported: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kBZip2Compression supported: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kLZ4Compression supported: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kZlibCompression supported: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kSnappyCompression supported: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: DMutex implementation: pthread_mutex_t Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:15 localhost 
ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562050320850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: 
rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: 
rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.memtable_whole_key_filtering: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562050320850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562050320850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562050320850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562050320850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:15 
localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562050320850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 
4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost 
ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:15 localhost 
ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 
02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562050320850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:15 
localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 
02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:15 
localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Dec 2 02:47:15 localhost 
ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8b20)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5620503202d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:15 localhost 
ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 
02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: 
CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8b20)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5620503202d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 
4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.preserve_internal_time_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory 
(0x5620512e8b20)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5620503202d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: 
rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:15 localhost 
ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.disable_auto_compactions: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: 
rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Dec 2 02:47:15 localhost 
ceph-osd[32707]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 565f4603-89cf-4617-b1e1-97bdb3afd91c Dec 2 02:47:15 localhost ceph-osd[32707]: 
rocksdb: EVENT_LOG_v1 {"time_micros": 1764661635342679, "job": 1, "event": "recovery_started", "wal_files": [31]} Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661635342971, "job": 1, "event": "recovery_finished"} Dec 2 02:47:15 localhost ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Dec 2 02:47:15 localhost ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old nid_max 1025 Dec 2 02:47:15 localhost ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old blobid_max 10240 Dec 2 02:47:15 localhost ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta ondisk_format 4 compat_ondisk_format 3 Dec 2 02:47:15 localhost ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta min_alloc_size 0x1000 Dec 2 02:47:15 localhost ceph-osd[32707]: freelist init Dec 2 02:47:15 localhost ceph-osd[32707]: freelist _read_cfg Dec 2 02:47:15 localhost ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete Dec 2 02:47:15 localhost ceph-osd[32707]: bluefs umount Dec 2 02:47:15 localhost ceph-osd[32707]: 
bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) close Dec 2 02:47:15 localhost podman[32818]: 2025-12-02 07:47:15.393473636 +0000 UTC m=+0.133828158 container remove 9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_snyder, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, version=7, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 02:47:15 localhost systemd[1]: libpod-conmon-9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710.scope: Deactivated successfully. 
Dec 2 02:47:15 localhost ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 2 02:47:15 localhost ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 2 02:47:15 localhost ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 2 02:47:15 localhost ceph-osd[32707]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Dec 2 02:47:15 localhost ceph-osd[32707]: bluefs mount
Dec 2 02:47:15 localhost ceph-osd[32707]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 2 02:47:15 localhost ceph-osd[32707]: bluefs mount shared_bdev_used = 4718592
Dec 2 02:47:15 localhost ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 2 02:47:15 localhost podman[33033]:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: RocksDB version: 7.9.2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Git sha 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: DB SUMMARY
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: DB Session ID: WSVYTTET1R9PTPBU71LU
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: CURRENT file: CURRENT
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: IDENTITY file: IDENTITY
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.error_if_exists: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.create_if_missing: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_checks: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.flush_verify_memtable_count: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.env: 0x56205046e310
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.fs: LegacyFileSystem
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.info_log: 0x5620512e9240
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_file_opening_threads: 16
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.statistics: (nil)
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.use_fsync: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_log_file_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_manifest_file_size: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.log_file_time_to_roll: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.keep_log_file_num: 1000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.recycle_log_file_num: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.allow_fallocate: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.allow_mmap_reads: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.allow_mmap_writes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.use_direct_reads: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.create_missing_column_families: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.db_log_dir:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.wal_dir: db.wal
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_cache_numshardbits: 6
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.WAL_ttl_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.WAL_size_limit_MB: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.manifest_preallocation_size: 4194304
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.is_fd_close_on_exec: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.advise_random_on_open: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.db_write_buffer_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_manager: 0x562050332140
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.access_hint_on_compaction_start: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.random_access_max_buffer_size: 1048576
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.use_adaptive_mutex: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.rate_limiter: (nil)
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.wal_recovery_mode: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_thread_tracking: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_pipelined_write: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.unordered_write: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.allow_concurrent_memtable_write: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_thread_max_yield_usec: 100
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_thread_slow_yield_usec: 3
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.row_cache: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.wal_filter: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.avoid_flush_during_recovery: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.allow_ingest_behind: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.two_write_queues: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.manual_wal_flush: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.wal_compression: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.atomic_flush: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.persist_stats_to_disk: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_dbid_to_manifest: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.log_readahead_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.file_checksum_gen_factory: Unknown
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.best_efforts_recovery: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.allow_data_in_errors: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.db_host_id: __hostname__
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enforce_single_del_contracts: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_background_jobs: 4
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_background_compactions: -1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_subcompactions: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.avoid_flush_during_shutdown: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.writable_file_max_buffer_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.delayed_write_rate : 16777216
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_total_wal_size: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.stats_dump_period_sec: 600
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.stats_persist_period_sec: 600
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.stats_history_buffer_size: 1048576
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_open_files: -1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bytes_per_sync: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.wal_bytes_per_sync: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.strict_bytes_per_sync: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_readahead_size: 2097152
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_background_flushes: -1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Compression algorithms supported:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kZSTD supported: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kXpressCompression supported: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kBZip2Compression supported: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kLZ4Compression supported: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kZlibCompression supported: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kLZ4HCCompression supported: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: #011kSnappyCompression supported: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e91e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5620503202d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb:
Options.min_blob_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e91e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5620503202d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb:
Options.force_consistency_checks: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e91e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5620503202d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:15 localhost 
ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e91e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5620503202d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 
read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 
localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: 
kCompactionStopStyleTotalSize Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456 Dec 2 
02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e91e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 
data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5620503202d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:15 
localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:15 localhost 
ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.report_bg_io_stats: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.sst_partitioner_factory: None Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e91e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5620503202d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:15 localhost podman[33033]: 2025-12-02 07:47:15.620359472 +0000 UTC m=+0.093759346 container create 8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_khayyam, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, name=rhceph, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=)
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e91e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5620503202d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e9240)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562050321610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e9240)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562050321610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.merge_operator: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e9240)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562050321610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression: LZ4 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.num_levels: 7 Dec 2 02:47:15 
localhost ceph-osd[32707]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:15 
localhost ceph-osd[32707]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: 
Options.memtable_huge_page_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 2 02:47:15 localhost 
ceph-osd[32707]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: 
rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 565f4603-89cf-4617-b1e1-97bdb3afd91c Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661635616423, "job": 1, "event": "recovery_started", "wal_files": [31]} Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661635622408, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661635, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "565f4603-89cf-4617-b1e1-97bdb3afd91c", "db_session_id": "WSVYTTET1R9PTPBU71LU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Dec 2 
02:47:15 localhost ceph-osd[32707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661635645251, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661635, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "565f4603-89cf-4617-b1e1-97bdb3afd91c", "db_session_id": "WSVYTTET1R9PTPBU71LU", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Dec 2 02:47:15 localhost systemd[1]: Started libpod-conmon-8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125.scope. Dec 2 02:47:15 localhost systemd[1]: Started libcrun container. 
Dec 2 02:47:15 localhost podman[33033]: 2025-12-02 07:47:15.574272406 +0000 UTC m=+0.047672300 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661635678773, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661635, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "565f4603-89cf-4617-b1e1-97bdb3afd91c", "db_session_id": "WSVYTTET1R9PTPBU71LU", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Dec 2 02:47:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5810abde662f821ee66832af087dbd454962fcba27c58cdd5ff6343d4ccdda/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:15 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/7f5810abde662f821ee66832af087dbd454962fcba27c58cdd5ff6343d4ccdda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5810abde662f821ee66832af087dbd454962fcba27c58cdd5ff6343d4ccdda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:15 localhost podman[33033]: 2025-12-02 07:47:15.713760533 +0000 UTC m=+0.187160377 container init 8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_khayyam, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.expose-services=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux ) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Dec 2 02:47:15 localhost 
ceph-osd[32707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661635716303, "job": 1, "event": "recovery_finished"} Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Dec 2 02:47:15 localhost podman[33033]: 2025-12-02 07:47:15.741080478 +0000 UTC m=+0.214480342 container start 8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_khayyam, GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=) Dec 2 02:47:15 localhost podman[33033]: 2025-12-02 07:47:15.74139093 +0000 UTC m=+0.214790764 container attach 8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_khayyam, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, 
url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, version=7, GIT_BRANCH=main, GIT_CLEAN=True, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4) Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562050388700 Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: DB pointer 0x56205123fa00 Dec 2 02:47:15 localhost ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Dec 2 02:47:15 localhost ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super from 4, latest 4 Dec 2 02:47:15 localhost ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super done Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 02:47:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB 
Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total 
Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.2 total, 0.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-1] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.2 total, 0.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 460.80 MB usag
Dec 2 02:47:15 localhost ceph-osd[32707]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 2 02:47:15 localhost ceph-osd[32707]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 2 02:47:15 localhost ceph-osd[32707]: _get_class not permitted to load lua
Dec 2 02:47:15 localhost ceph-osd[32707]: _get_class not permitted to load sdk
Dec 2 02:47:15 localhost ceph-osd[32707]: _get_class not permitted to load test_remote_reads
Dec 2 02:47:15 localhost ceph-osd[32707]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 2 02:47:15
localhost ceph-osd[32707]: osd.4 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 2 02:47:15 localhost ceph-osd[32707]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 2 02:47:15 localhost ceph-osd[32707]: osd.4 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 2 02:47:15 localhost ceph-osd[32707]: osd.4 0 load_pgs
Dec 2 02:47:15 localhost ceph-osd[32707]: osd.4 0 load_pgs opened 0 pgs
Dec 2 02:47:15 localhost ceph-osd[32707]: osd.4 0 log_to_monitors true
Dec 2 02:47:15 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4[32703]: 2025-12-02T07:47:15.846+0000 7f36cff76a80 -1 osd.4 0 log_to_monitors true
Dec 2 02:47:16 localhost systemd[1]: var-lib-containers-storage-overlay-452369b48fca2f2ac87b01de59b0b8d8712884bae06e23116539c12224dd8f96-merged.mount: Deactivated successfully.
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: {
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: "27399dc0-3412-47da-81e0-87f9f4a96daf": {
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: "ceph_fsid": "c7c8e171-a193-56fb-95fa-8879fcfa7074",
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: "osd_id": 1,
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: "osd_uuid": "27399dc0-3412-47da-81e0-87f9f4a96daf",
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: "type": "bluestore"
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: },
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: "e70bab01-7143-4db1-8b99-c97ca4b22476": {
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: "ceph_fsid": "c7c8e171-a193-56fb-95fa-8879fcfa7074",
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: "osd_id": 4,
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: "osd_uuid": "e70bab01-7143-4db1-8b99-c97ca4b22476",
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: "type": "bluestore"
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: }
Dec 2 02:47:16 localhost sleepy_khayyam[33230]: }
Dec 2 02:47:16 localhost systemd[1]: libpod-8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125.scope: Deactivated successfully.
Dec 2 02:47:16 localhost podman[33033]: 2025-12-02 07:47:16.276881987 +0000 UTC m=+0.750281931 container died 8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_khayyam, version=7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, vcs-type=git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, distribution-scope=public)
Dec 2 02:47:16 localhost systemd[1]: var-lib-containers-storage-overlay-7f5810abde662f821ee66832af087dbd454962fcba27c58cdd5ff6343d4ccdda-merged.mount: Deactivated successfully.
Dec 2 02:47:16 localhost podman[33300]: 2025-12-02 07:47:16.418066822 +0000 UTC m=+0.132669534 container remove 8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_khayyam, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 02:47:16 localhost systemd[1]: libpod-conmon-8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125.scope: Deactivated successfully. 
Dec 2 02:47:16 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 2 02:47:16 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 2 02:47:17 localhost ceph-osd[32707]: osd.4 0 done with init, starting boot process
Dec 2 02:47:17 localhost ceph-osd[32707]: osd.4 0 start_boot
Dec 2 02:47:17 localhost ceph-osd[32707]: osd.4 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 2 02:47:17 localhost ceph-osd[32707]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 2 02:47:17 localhost ceph-osd[32707]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 2 02:47:17 localhost ceph-osd[32707]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 2 02:47:17 localhost ceph-osd[32707]: osd.4 0 bench count 12288000 bsize 4 KiB
Dec 2 02:47:17 localhost ceph-osd[31770]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 15.718 iops: 4023.857 elapsed_sec: 0.746
Dec 2 02:47:17 localhost ceph-osd[31770]: log_channel(cluster) log [WRN] : OSD bench result of 4023.857035 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 2 02:47:17 localhost ceph-osd[31770]: osd.1 0 waiting for initial osdmap
Dec 2 02:47:17 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1[31766]: 2025-12-02T07:47:17.446+0000 7f83199b6640 -1 osd.1 0 waiting for initial osdmap
Dec 2 02:47:17 localhost ceph-osd[31770]: osd.1 13 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 2 02:47:17 localhost ceph-osd[31770]: osd.1 13 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 2 02:47:17 localhost ceph-osd[31770]: osd.1 13 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 2 02:47:17 localhost ceph-osd[31770]: osd.1 13 check_osdmap_features require_osd_release unknown -> reef
Dec 2 02:47:17 localhost ceph-osd[31770]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 2 02:47:17 localhost ceph-osd[31770]: osd.1 13 set_numa_affinity not setting numa affinity
Dec 2 02:47:17 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1[31766]: 2025-12-02T07:47:17.486+0000 7f83147cb640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 2 02:47:17 localhost ceph-osd[31770]: osd.1 13 _collect_metadata loop3: no unique device id for loop3: fallback method has no model nor serial
Dec 2 02:47:18 localhost ceph-osd[31770]: osd.1 14 state: booting -> active
Dec 2 02:47:18 localhost podman[33429]: 2025-12-02 07:47:18.677268061 +0000 UTC m=+0.108972069 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage
7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, version=7, build-date=2025-11-26T19:44:28Z, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, architecture=x86_64) Dec 2 02:47:18 localhost podman[33429]: 2025-12-02 07:47:18.782765075 +0000 UTC m=+0.214469083 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, architecture=x86_64, description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, 
distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Dec 2 02:47:20 localhost ceph-osd[31770]: osd.1 16 crush map has features 288514051259236352, adjusting msgr requires for clients Dec 2 02:47:20 localhost ceph-osd[31770]: osd.1 16 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons Dec 2 02:47:20 localhost ceph-osd[31770]: osd.1 16 crush map has features 3314933000852226048, adjusting msgr requires for osds Dec 2 02:47:20 localhost ceph-osd[31770]: osd.1 pg_epoch: 16 pg[1.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 02:47:20 localhost podman[33626]: Dec 2 02:47:20 localhost podman[33626]: 2025-12-02 07:47:20.660909277 +0000 UTC m=+0.049935108 container create 52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_brahmagupta, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, architecture=x86_64, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_BRANCH=main, name=rhceph) Dec 2 02:47:20 localhost systemd[1]: Started libpod-conmon-52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb.scope. Dec 2 02:47:20 localhost systemd[1]: Started libcrun container. Dec 2 02:47:20 localhost podman[33626]: 2025-12-02 07:47:20.638565376 +0000 UTC m=+0.027591207 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:20 localhost podman[33626]: 2025-12-02 07:47:20.75818068 +0000 UTC m=+0.147206531 container init 52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_brahmagupta, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, release=1763362218, description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 02:47:20 localhost 
wonderful_brahmagupta[33641]: 167 167 Dec 2 02:47:20 localhost systemd[1]: libpod-52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb.scope: Deactivated successfully. Dec 2 02:47:20 localhost podman[33626]: 2025-12-02 07:47:20.777838226 +0000 UTC m=+0.166864087 container start 52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_brahmagupta, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, version=7, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 02:47:20 localhost podman[33626]: 2025-12-02 07:47:20.778351596 +0000 UTC m=+0.167377447 container attach 52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_brahmagupta, distribution-scope=public, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhceph, GIT_CLEAN=True, architecture=x86_64, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, ceph=True, RELEASE=main, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 02:47:20 localhost podman[33626]: 2025-12-02 07:47:20.780348844 +0000 UTC m=+0.169374685 container died 52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_brahmagupta, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_BRANCH=main, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph 
Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 02:47:20 localhost systemd[1]: var-lib-containers-storage-overlay-82a02638e2b1e03fa57eaeda5b3473b19a5a5e24ad16cfd971266b3602f13e9e-merged.mount: Deactivated successfully. Dec 2 02:47:20 localhost podman[33646]: 2025-12-02 07:47:20.894149011 +0000 UTC m=+0.104503516 container remove 52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_brahmagupta, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vendor=Red Hat, Inc., ceph=True, vcs-type=git, GIT_CLEAN=True, release=1763362218) Dec 2 02:47:20 localhost systemd[1]: libpod-conmon-52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb.scope: Deactivated successfully. 
Dec 2 02:47:21 localhost ceph-osd[32707]: osd.4 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 23.144 iops: 5924.796 elapsed_sec: 0.506
Dec 2 02:47:21 localhost ceph-osd[32707]: log_channel(cluster) log [WRN] : OSD bench result of 5924.795610 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.4. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 2 02:47:21 localhost ceph-osd[32707]: osd.4 0 waiting for initial osdmap
Dec 2 02:47:21 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4[32703]: 2025-12-02T07:47:21.042+0000 7f36cc70a640 -1 osd.4 0 waiting for initial osdmap
Dec 2 02:47:21 localhost ceph-osd[32707]: osd.4 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 2 02:47:21 localhost ceph-osd[32707]: osd.4 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 2 02:47:21 localhost ceph-osd[32707]: osd.4 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 2 02:47:21 localhost ceph-osd[32707]: osd.4 16 check_osdmap_features require_osd_release unknown -> reef
Dec 2 02:47:21 localhost ceph-osd[32707]: osd.4 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 2 02:47:21 localhost ceph-osd[32707]: osd.4 16 set_numa_affinity not setting numa affinity
Dec 2 02:47:21 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4[32703]: 2025-12-02T07:47:21.062+0000 7f36c751f640 -1 osd.4 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 2 02:47:21 localhost ceph-osd[32707]: osd.4 16 _collect_metadata loop4: no unique device id for loop4: fallback method has no model nor serial
Dec 2 02:47:21 localhost podman[33665]:
Dec 2
02:47:21 localhost podman[33665]: 2025-12-02 07:47:21.12370174 +0000 UTC m=+0.082770337 container create b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_proskuriakova, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, version=7, GIT_BRANCH=main, architecture=x86_64, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 02:47:21 localhost systemd[1]: Started libpod-conmon-b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb.scope. Dec 2 02:47:21 localhost systemd[1]: Started libcrun container. 
Dec 2 02:47:21 localhost podman[33665]: 2025-12-02 07:47:21.095930198 +0000 UTC m=+0.054998805 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e1b53ebe299577311c4fbd12fbe8d40f40acbf6b438cafbac5e8299f13e27c/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:21 localhost ceph-osd[31770]: osd.1 pg_epoch: 17 pg[1.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=17) [1,3] r=0 lpr=17 pi=[16,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [1] -> [1,3], acting [1] -> [1,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 02:47:21 localhost ceph-osd[31770]: osd.1 pg_epoch: 17 pg[1.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=17) [1,3] r=0 lpr=17 pi=[16,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 02:47:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e1b53ebe299577311c4fbd12fbe8d40f40acbf6b438cafbac5e8299f13e27c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:21 localhost ceph-osd[32707]: osd.4 17 state: booting -> active Dec 2 02:47:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e1b53ebe299577311c4fbd12fbe8d40f40acbf6b438cafbac5e8299f13e27c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:21 localhost podman[33665]: 2025-12-02 07:47:21.237938044 +0000 UTC m=+0.197006651 container init b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_proskuriakova, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vcs-type=git, GIT_BRANCH=main, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, RELEASE=main, version=7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 02:47:21 localhost podman[33665]: 2025-12-02 07:47:21.24887589 +0000 UTC m=+0.207944487 container start b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_proskuriakova, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , RELEASE=main, GIT_BRANCH=main, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, architecture=x86_64, 
vcs-type=git, version=7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 02:47:21 localhost podman[33665]: 2025-12-02 07:47:21.249154361 +0000 UTC m=+0.208222978 container attach b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_proskuriakova, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: [ Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: { Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "available": false, Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "ceph_device": false, Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "device_id": "QEMU_DVD-ROM_QM00001", Dec 2 
02:47:22 localhost distracted_proskuriakova[33682]: "lsm_data": {}, Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "lvs": [], Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "path": "/dev/sr0", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "rejected_reasons": [ Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "Has a FileSystem", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "Insufficient space (<5GB)" Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: ], Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "sys_api": { Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "actuators": null, Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "device_nodes": "sr0", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "human_readable_size": "482.00 KB", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "id_bus": "ata", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "model": "QEMU DVD-ROM", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "nr_requests": "2", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "partitions": {}, Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "path": "/dev/sr0", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "removable": "1", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "rev": "2.5+", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "ro": "0", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "rotational": "1", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "sas_address": "", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "sas_device_handle": "", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "scheduler_mode": "mq-deadline", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "sectors": 0, Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "sectorsize": "2048", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "size": 
493568.0, Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "support_discard": "0", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "type": "disk", Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: "vendor": "QEMU" Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: } Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: } Dec 2 02:47:22 localhost distracted_proskuriakova[33682]: ] Dec 2 02:47:22 localhost systemd[1]: libpod-b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb.scope: Deactivated successfully. Dec 2 02:47:22 localhost podman[33665]: 2025-12-02 07:47:22.084055492 +0000 UTC m=+1.043124059 container died b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_proskuriakova, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, version=7, GIT_CLEAN=True) Dec 2 02:47:22 localhost systemd[1]: 
var-lib-containers-storage-overlay-a3e1b53ebe299577311c4fbd12fbe8d40f40acbf6b438cafbac5e8299f13e27c-merged.mount: Deactivated successfully. Dec 2 02:47:22 localhost podman[34995]: 2025-12-02 07:47:22.147410622 +0000 UTC m=+0.056769434 container remove b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_proskuriakova, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, release=1763362218, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7) Dec 2 02:47:22 localhost systemd[1]: libpod-conmon-b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb.scope: Deactivated successfully. 
Dec 2 02:47:22 localhost ceph-osd[31770]: osd.1 pg_epoch: 18 pg[1.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=18) [1,5,3] r=0 lpr=18 pi=[16,18)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [1,3] -> [1,5,3], acting [1,3] -> [1,5,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 02:47:22 localhost ceph-osd[31770]: osd.1 pg_epoch: 18 pg[1.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=18) [1,5,3] r=0 lpr=18 pi=[16,18)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 02:47:23 localhost ceph-osd[31770]: osd.1 pg_epoch: 19 pg[1.0( empty local-lis/les=18/19 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=18) [1,5,3] r=0 lpr=18 pi=[16,18)/0 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 02:47:31 localhost systemd[26272]: Starting Mark boot as successful... Dec 2 02:47:31 localhost podman[35122]: 2025-12-02 07:47:31.506152792 +0000 UTC m=+0.105314878 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, 
GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 02:47:31 localhost systemd[26272]: Finished Mark boot as successful. Dec 2 02:47:31 localhost podman[35122]: 2025-12-02 07:47:31.610365814 +0000 UTC m=+0.209527930 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, version=7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public) Dec 2 02:47:36 localhost sshd[35207]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:47:37 localhost sshd[35209]: main: sshd: ssh-rsa 
algorithm is disabled Dec 2 02:48:15 localhost sshd[35211]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:48:33 localhost podman[35314]: 2025-12-02 07:48:33.51165103 +0000 UTC m=+0.091257036 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, architecture=x86_64, RELEASE=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.buildah.version=1.41.4, name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 02:48:33 localhost podman[35314]: 2025-12-02 07:48:33.616325567 +0000 UTC m=+0.195931573 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vcs-type=git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, architecture=x86_64, ceph=True, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 02:48:42 localhost systemd-logind[760]: Session 13 logged out. Waiting for processes to exit. Dec 2 02:48:42 localhost systemd[1]: session-13.scope: Deactivated successfully. Dec 2 02:48:42 localhost systemd[1]: session-13.scope: Consumed 22.531s CPU time. Dec 2 02:48:42 localhost systemd-logind[760]: Removed session 13. Dec 2 02:48:50 localhost sshd[35459]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:48:56 localhost sshd[35461]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:50:02 localhost sshd[35539]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:50:11 localhost sshd[35541]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:50:17 localhost sshd[35543]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:50:54 localhost systemd[26272]: Created slice User Background Tasks Slice. Dec 2 02:50:54 localhost systemd[26272]: Starting Cleanup of User's Temporary Files and Directories... Dec 2 02:50:54 localhost systemd[26272]: Finished Cleanup of User's Temporary Files and Directories. 
Dec 2 02:51:17 localhost sshd[35623]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:51:27 localhost sshd[35625]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:51:44 localhost sshd[35703]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:52:06 localhost sshd[35705]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:52:06 localhost systemd-logind[760]: New session 27 of user zuul. Dec 2 02:52:06 localhost systemd[1]: Started Session 27 of User zuul. Dec 2 02:52:06 localhost python3[35753]: ansible-ansible.legacy.ping Invoked with data=pong Dec 2 02:52:07 localhost python3[35798]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 02:52:08 localhost python3[35818]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541914.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Dec 2 02:52:08 localhost python3[35874]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:52:09 localhost python3[35917]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764661928.6526706-65484-123595612237668/source _original_basename=tmps7tvav1t follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True 
unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:52:09 localhost python3[35947]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:52:10 localhost python3[35963]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:52:10 localhost python3[35979]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:52:11 localhost python3[35995]: ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey#012 regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:52:11 localhost python3[36009]: ansible-ping Invoked with data=pong Dec 2 02:52:22 localhost sshd[36010]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:52:22 localhost systemd-logind[760]: New session 28 of user tripleo-admin. Dec 2 02:52:22 localhost systemd[1]: Created slice User Slice of UID 1003. Dec 2 02:52:22 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Dec 2 02:52:22 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Dec 2 02:52:22 localhost systemd[1]: Starting User Manager for UID 1003... Dec 2 02:52:22 localhost systemd[36014]: Queued start job for default target Main User Target. Dec 2 02:52:22 localhost systemd[36014]: Created slice User Application Slice. Dec 2 02:52:22 localhost systemd[36014]: Started Mark boot as successful after the user session has run 2 minutes. Dec 2 02:52:22 localhost systemd[36014]: Started Daily Cleanup of User's Temporary Directories. Dec 2 02:52:22 localhost systemd[36014]: Reached target Paths. Dec 2 02:52:22 localhost systemd[36014]: Reached target Timers. 
Dec 2 02:52:22 localhost systemd[36014]: Starting D-Bus User Message Bus Socket... Dec 2 02:52:22 localhost systemd[36014]: Starting Create User's Volatile Files and Directories... Dec 2 02:52:22 localhost systemd[36014]: Listening on D-Bus User Message Bus Socket. Dec 2 02:52:22 localhost systemd[36014]: Reached target Sockets. Dec 2 02:52:22 localhost systemd[36014]: Finished Create User's Volatile Files and Directories. Dec 2 02:52:22 localhost systemd[36014]: Reached target Basic System. Dec 2 02:52:22 localhost systemd[36014]: Reached target Main User Target. Dec 2 02:52:22 localhost systemd[36014]: Startup finished in 136ms. Dec 2 02:52:22 localhost systemd[1]: Started User Manager for UID 1003. Dec 2 02:52:22 localhost systemd[1]: Started Session 28 of User tripleo-admin. Dec 2 02:52:23 localhost python3[36075]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d Dec 2 02:52:24 localhost sshd[36080]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:52:28 localhost python3[36097]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config Dec 2 02:52:29 localhost python3[36113]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None Dec 2 02:52:29 localhost python3[36161]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.yjfp0a4itmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:52:30 localhost python3[36191]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.yjfp0a4itmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! 
marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:52:31 localhost python3[36207]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.yjfp0a4itmphosts insertbefore=BOF block=172.17.0.106 np0005541912.localdomain np0005541912#012172.18.0.106 np0005541912.storage.localdomain np0005541912.storage#012172.20.0.106 np0005541912.storagemgmt.localdomain np0005541912.storagemgmt#012172.17.0.106 np0005541912.internalapi.localdomain np0005541912.internalapi#012172.19.0.106 np0005541912.tenant.localdomain np0005541912.tenant#012192.168.122.106 np0005541912.ctlplane.localdomain np0005541912.ctlplane#012172.17.0.107 np0005541913.localdomain np0005541913#012172.18.0.107 np0005541913.storage.localdomain np0005541913.storage#012172.20.0.107 np0005541913.storagemgmt.localdomain np0005541913.storagemgmt#012172.17.0.107 np0005541913.internalapi.localdomain np0005541913.internalapi#012172.19.0.107 np0005541913.tenant.localdomain np0005541913.tenant#012192.168.122.107 np0005541913.ctlplane.localdomain np0005541913.ctlplane#012172.17.0.108 np0005541914.localdomain np0005541914#012172.18.0.108 np0005541914.storage.localdomain np0005541914.storage#012172.20.0.108 np0005541914.storagemgmt.localdomain np0005541914.storagemgmt#012172.17.0.108 np0005541914.internalapi.localdomain np0005541914.internalapi#012172.19.0.108 np0005541914.tenant.localdomain np0005541914.tenant#012192.168.122.108 np0005541914.ctlplane.localdomain np0005541914.ctlplane#012172.17.0.103 np0005541909.localdomain np0005541909#012172.18.0.103 np0005541909.storage.localdomain np0005541909.storage#012172.20.0.103 np0005541909.storagemgmt.localdomain np0005541909.storagemgmt#012172.17.0.103 np0005541909.internalapi.localdomain np0005541909.internalapi#012172.19.0.103 np0005541909.tenant.localdomain np0005541909.tenant#012192.168.122.103 
np0005541909.ctlplane.localdomain np0005541909.ctlplane#012172.17.0.104 np0005541910.localdomain np0005541910#012172.18.0.104 np0005541910.storage.localdomain np0005541910.storage#012172.20.0.104 np0005541910.storagemgmt.localdomain np0005541910.storagemgmt#012172.17.0.104 np0005541910.internalapi.localdomain np0005541910.internalapi#012172.19.0.104 np0005541910.tenant.localdomain np0005541910.tenant#012192.168.122.104 np0005541910.ctlplane.localdomain np0005541910.ctlplane#012172.17.0.105 np0005541911.localdomain np0005541911#012172.18.0.105 np0005541911.storage.localdomain np0005541911.storage#012172.20.0.105 np0005541911.storagemgmt.localdomain np0005541911.storagemgmt#012172.17.0.105 np0005541911.internalapi.localdomain np0005541911.internalapi#012172.19.0.105 np0005541911.tenant.localdomain np0005541911.tenant#012192.168.122.105 np0005541911.ctlplane.localdomain np0005541911.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012192.168.122.99 overcloud.ctlplane.localdomain#012172.18.0.121 overcloud.storage.localdomain#012172.20.0.222 overcloud.storagemgmt.localdomain#012172.17.0.136 overcloud.internalapi.localdomain#012172.21.0.241 overcloud.localdomain#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:52:31 localhost python3[36223]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.yjfp0a4itmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:52:32 localhost python3[36240]: ansible-file Invoked with path=/tmp/ansible.yjfp0a4itmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:52:33 localhost python3[36256]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:52:34 localhost python3[36273]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 2 02:52:35 localhost sshd[36275]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:52:38 localhost python3[36294]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:52:39 localhost python3[36311]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False 
cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 2 02:52:53 localhost sshd[36550]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:53:48 localhost kernel: SELinux: Converting 2699 SID table entries... Dec 2 02:53:48 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 2 02:53:48 localhost kernel: SELinux: policy capability open_perms=1 Dec 2 02:53:48 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 2 02:53:48 localhost kernel: SELinux: policy capability always_check_network=0 Dec 2 02:53:48 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 2 02:53:48 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 2 02:53:48 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 2 02:53:48 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=6 res=1 Dec 2 02:53:48 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 2 02:53:48 localhost systemd[1]: Starting man-db-cache-update.service... Dec 2 02:53:48 localhost systemd[1]: Reloading. Dec 2 02:53:49 localhost systemd-rc-local-generator[37249]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 02:53:49 localhost systemd-sysv-generator[37256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 02:53:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 02:53:49 localhost systemd[1]: Queuing reload/restart jobs for marked units… Dec 2 02:53:49 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Dec 2 02:53:49 localhost systemd[1]: Finished man-db-cache-update.service. Dec 2 02:53:49 localhost systemd[1]: run-ra17f050e829d4feb884d043e6e127c91.service: Deactivated successfully. Dec 2 02:53:49 localhost sshd[37676]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:53:55 localhost python3[37693]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:53:57 localhost python3[37832]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 02:53:57 localhost systemd[1]: Reloading. Dec 2 02:53:57 localhost systemd-rc-local-generator[37856]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 02:53:57 localhost systemd-sysv-generator[37861]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 02:53:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 02:53:57 localhost python3[37886]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:53:58 localhost python3[37902]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:53:59 localhost python3[37919]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Dec 2 02:54:01 localhost python3[37937]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:01 localhost python3[37955]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:02 localhost python3[37973]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 02:54:02 localhost systemd[1]: Reloading Network Manager... 
Dec 2 02:54:02 localhost NetworkManager[5967]: [1764662042.4941] audit: op="reload" arg="0" pid=37976 uid=0 result="success"
Dec 2 02:54:02 localhost NetworkManager[5967]: [1764662042.4957] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf))
Dec 2 02:54:02 localhost NetworkManager[5967]: [1764662042.4958] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Dec 2 02:54:02 localhost systemd[1]: Reloaded Network Manager.
Dec 2 02:54:02 localhost python3[37992]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:54:04 localhost python3[38009]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:54:04 localhost python3[38027]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:54:05 localhost python3[38043]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:06 localhost python3[38059]: ansible-tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 2 02:54:06 localhost python3[38075]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:54:07 localhost python3[38091]: ansible-blockinfile Invoked with path=/tmp/ansible.fwl50lyf block=[192.168.122.106]*,[np0005541912.ctlplane.localdomain]*,[172.17.0.106]*,[np0005541912.internalapi.localdomain]*,[172.18.0.106]*,[np0005541912.storage.localdomain]*,[172.20.0.106]*,[np0005541912.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005541912.tenant.localdomain]*,[np0005541912.localdomain]*,[np0005541912]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKgyHtHHKWFdaOqx5AsvOJPmNsbjVxvzh05A7Hy02rgbdg4zBUd/E0mqG+tYVGg12fIdbRNgjUfM+PEGJznZdEQnZCtLgMhbpRC33IbCXMw7Ev/tRfkffpP+H8VdyGL83zCFFnMIMD2IDWU+MjTf/ais63Zv/UiBL24pkZ18u3nypjN3uN2FdeDF4JNtnSVK6i1a+wE6wLmdSAfX8ovFbLhZMgAAPU3I3Fu5D/pSa6OjKshEcNy0m6KCKwQoT6cbDGsnMjd2sdE1Vc+KgkrBN3fMmrChdgi2Ig7CpkdGvQF0G/t53cwNatjp78FrNCHjpLcIAFw3QgfepiTiXQbXQ/jC5xkdM+5wIcSmB3rf3GKaUgaxnjk55GAXxrHwAFwOi+ltxSNPszH9vfIBLluThUdmQmvtCOCvEFZ5uuVuu94A5frS9BzOIzz7ylrqau3nHGaPjbT80XubnqZsHlOahsovbk1mu3ewvoitAVb0E+BBroNWeHT9BbA8Igh+sxwGM=#012[192.168.122.107]*,[np0005541913.ctlplane.localdomain]*,[172.17.0.107]*,[np0005541913.internalapi.localdomain]*,[172.18.0.107]*,[np0005541913.storage.localdomain]*,[172.20.0.107]*,[np0005541913.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005541913.tenant.localdomain]*,[np0005541913.localdomain]*,[np0005541913]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDYXeXWwxJkeR9i2V9hYiVGqEGSbkwFIKUbTm3m8em9m5o380jUORSYXOITLm0CAl/waSYEc4fiPu2sAYDISig1zqAItfAODEdayFoKK63ui7vq92ZPKayhmjahj2jNo3KMAZ5aFzNBcowsRooRqLNJ7R9BAQ4H8kdqL9xdRjy5bvfWJHGrm8PvWcUaRYebCQ35j+7nHq4RFRYsd964NKjrq+FxkjyOSs2AxE+SHYOVgAAd8Jp2uyr3dR56IzWy8WqQzPj6tlsER8+/Kt1lASATcuMFeteA0M7tbjZxEIAPyfktPVQOq9mgeFOFmTf8oTbt94Rk2QmyNI4oE7sQHFWo9UWrvZd9LpDDartUls5uHunn4SzvgvtRimO3e1hNXn0VQLGNfSUwGij0R3iOYJpACHgly3J7sbX3tROvwRpawZlGIGZY46vaYRMXGClXz+lUCa6ZZO+f6BX6bEt0VfYWX8IVmnH2oJXEJBYJPVXZML+OcczJc8zEfHxBylpZn4k=#012[192.168.122.108]*,[np0005541914.ctlplane.localdomain]*,[172.17.0.108]*,[np0005541914.internalapi.localdomain]*,[172.18.0.108]*,[np0005541914.storage.localdomain]*,[172.20.0.108]*,[np0005541914.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005541914.tenant.localdomain]*,[np0005541914.localdomain]*,[np0005541914]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCHh7115UF/t7QzqWY1fk2wHPOuHuMPRhaYTC/yfMWr+nqJ5/TNZTuFxq0aW/1gHanB2usmC0wpWf4c1KsPZ71Ehs/j5nV1wfGtNVEq5Zj7uhs0ea/SQToF2RS406RoIzJW6ogv4Kl3nxGEK6c44WCu8+Ki98dCQ4wesh5kSBkqgiSq2IZkL2gjoAKeXdracGRJ596gTB0yfsMl/qdJDneVHMq/rptlFhabLeiEN+7C0o0gsZwYsxCd2oSB+DD9KfXhWIBeXRr1B7mFcMZpGNG7pG0d1IjYOUmqjvVpECHrLvjiitS3800ZEFwygU4sbM/DWHelobjtJB/fxxPTtGNlbH4MK/OGFh2mm5jB1LMqWSsifA/ZAHASAAffWDwKtF+xJ06OHRDT6gjzOd7VJpc8kR9Jn9pT7UnjypnrM12GtrO0CH8Lf3rin71kf9iZRIphqWXhiLN3G/mdJC2XPIxJp7NQ1Mqc5IhHciCv80bvsGrzLCtAr16/b+cPYo7vIGU=#012[192.168.122.103]*,[np0005541909.ctlplane.localdomain]*,[172.17.0.103]*,[np0005541909.internalapi.localdomain]*,[172.18.0.103]*,[np0005541909.storage.localdomain]*,[172.20.0.103]*,[np0005541909.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005541909.tenant.localdomain]*,[np0005541909.localdomain]*,[np0005541909]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0b4xecJ9cZa0s7FCPYSs6kLrfHyBh8YL/KS+tj3DrfUU03KCcmbHQesHBBcRxB6PDYjueAsvx5rGXzjMojO5Jz2DlZoSPaBM9tm/HAKWhaiL+seTfrRsNLFvxfWyxU/x0FUSOTf01ZThrT/IJ5WkfJD4UgZQSzUPucffImwFt4y2oERfa96sAwSwE4o5RuLzRdKuWB3npxcApj2/3+pyWR59yubokMiU506MI37Hbg8xCaC5qn4ISKB8WBJObICoNQoatrbcqSOrrUEFv/vcWANDYUEw6XzTTwkuIu6dJPJiJh8j5TzDnnvKSK+f3eEG7OCiz814F+o82tDo7U6k5ERO0xmElXdOlPYsiuM5+CTQmmm6xmFN2L3HIvZlyPn3oF26oV+INAd3XsF5MIFcfpGUXH5b04gE7LhpdVLVfLGGYSVWjZhzxl/Wa0OiHoMaDUYoN2bPG0h5SPUDIyDv2jW3FDxhOWANR/9ITUCQpz3gSwl/1AVN3HCWf+RUeLuE=#012[192.168.122.104]*,[np0005541910.ctlplane.localdomain]*,[172.17.0.104]*,[np0005541910.internalapi.localdomain]*,[172.18.0.104]*,[np0005541910.storage.localdomain]*,[172.20.0.104]*,[np0005541910.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005541910.tenant.localdomain]*,[np0005541910.localdomain]*,[np0005541910]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOmh2HMG9Y5+9VA8Ap3pHIOQhG/GfAsIqnmfJJuGwKb8N2T9r1Yd+kmoP7Xs41cto4h6Fw1f4Pa6Tw050y3LmwpXvDN+2Qq1qYI0rT4pqOiYBkyMbOQhqLF5tA+MNYGdibQj/fWkG+gKa8wwzkTgCEAn6PgEZiqR9LFJrqr4RfQDxaWCLmXM96+AVGG5/SXWx5u6T3lanUnpcfISvB2yx4HifsINAHPgLR4weEzra/b7e0QNyxItxvlDseasPyeYHD3Hdi2PNuUmoZC+zWEoWoU3BMAQeXR7lmEcdtyK5wr0pIBmf0CKFdvGrdVWrzAUbDc8ZHXmWyKlWHHZvHch1V2r/S4J2983UsG3sJwM8954Tj325LgS1nldIYBSjwMGfhZFYzmy9obAN7ZSV5qwD0h+rxt/I9RNdXS3SRu9tOZI+AN59De44cF23OJS5MfrfnB7JUnBOv4ScVML4rPjPx9L4/omOlfbBVJx42b1RlboXEk52J7Aa3xRseA4Elvuk=#012[192.168.122.105]*,[np0005541911.ctlplane.localdomain]*,[172.17.0.105]*,[np0005541911.internalapi.localdomain]*,[172.18.0.105]*,[np0005541911.storage.localdomain]*,[172.20.0.105]*,[np0005541911.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005541911.tenant.localdomain]*,[np0005541911.localdomain]*,[np0005541911]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzI5YTDMvj8zBlKqeNplIMBQQJ43gcDfB5cRE7DwwpHBRcqOuhSoIm7r0C3h5ABQJYkTXEGRY0i5HC5eMErD7SKRJJ3q9aZ+uv4VvUGagr7M9S/JGUjZej2+ACXZ7L+d9MLt389xVtIuuNh5Cy3U8muIBEAS1b4mXOJ95eiW3M5b2hxmol0DTjUMX/bLtJU/MQ09wE72pj6Uqz/CCFsUwDBZlQ3jcVK74fYwgItCNkLJ+D2E4wTl4Ei8XOlEY9cV8B1E+aK6iUKesiya0Vfi/Ant77ONQDeCsI21AJDbi5wtUXg4qXBu3Z/zObZiEmedzqWj7K46Nv8lDlQoeoKuxzTCwxgn0PaorQgkUvUdAyk5Qo4BaUOv8ojICiZvRy9QZ3jblr1dCM/Jy3g4Sz6Hz4QHxtV21nUw//sBN2X6jCHQVGTJeZrbVvgGNcGiqcCzQTW/4NoiOB0ho7RVNtD+oYb5UE+Lh+Ibua3bv7zfnLjsw1GiyclsCgrQTKBl8Netc=#012 create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:07 localhost python3[38107]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.fwl50lyf' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:54:08 localhost python3[38125]: ansible-file Invoked with path=/tmp/ansible.fwl50lyf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:09 localhost python3[38141]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:54:09 localhost python3[38157]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:54:09 localhost python3[38175]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:54:10 localhost python3[38194]: ansible-community.general.cloud_init_data_facts Invoked with filter=status
Dec 2 02:54:10 localhost sshd[38258]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:54:12 localhost python3[38333]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:54:13 localhost python3[38350]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:54:16 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 2 02:54:16 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 2 02:54:16 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
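The ssh_known_hosts tasks logged above follow a stage-then-replace pattern: ansible-tempfile creates a staging file, ansible-blockinfile writes a marker-delimited block of host keys into it, a shell task overwrites /etc/ssh/ssh_known_hosts from it in one `cat`, and ansible-file removes the staging file. A minimal runnable sketch; the KNOWN_HOSTS path and the sample host entry are placeholders, not values from this log:

```shell
# Sketch of the stage-then-replace sequence: tempfile -> blockinfile -> cat -> rm.
KNOWN_HOSTS=$(mktemp)                    # stands in for /etc/ssh/ssh_known_hosts
tmp=$(mktemp /tmp/ansible.XXXXXX)        # ansible-tempfile with prefix=ansible.
{
  echo '# BEGIN ANSIBLE MANAGED BLOCK'   # blockinfile marker_begin
  echo '[host.example]* ssh-rsa AAAA...placeholder-key'
  echo '# END ANSIBLE MANAGED BLOCK'     # blockinfile marker_end
} >> "$tmp"
cat "$tmp" > "$KNOWN_HOSTS"              # replace the target in a single write
rm -f "$tmp"                             # ansible-file state=absent
```

Staging in a tempfile means the target file is never left half-written while the block is being assembled.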
Dec 2 02:54:16 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 02:54:16 localhost systemd[1]: Reloading.
Dec 2 02:54:16 localhost systemd-rc-local-generator[38434]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:54:16 localhost systemd-sysv-generator[38438]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:54:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:54:16 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 02:54:16 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 2 02:54:16 localhost systemd[1]: tuned.service: Deactivated successfully.
Dec 2 02:54:16 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 2 02:54:16 localhost systemd[1]: tuned.service: Consumed 1.808s CPU time.
Dec 2 02:54:16 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 2 02:54:16 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 02:54:16 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 02:54:16 localhost systemd[1]: run-r852279eb83bb4b36b98357f88d1d8272.service: Deactivated successfully.
Dec 2 02:54:18 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Dec 2 02:54:18 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 02:54:18 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 02:54:18 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 02:54:18 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 02:54:18 localhost systemd[1]: run-rcdbec0c656584f509556249d1b2503d6.service: Deactivated successfully.
Dec 2 02:54:19 localhost python3[38786]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:54:19 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 2 02:54:19 localhost systemd[1]: tuned.service: Deactivated successfully.
Dec 2 02:54:19 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 2 02:54:19 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 2 02:54:20 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Dec 2 02:54:21 localhost python3[38981]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:54:22 localhost python3[38998]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Dec 2 02:54:22 localhost python3[39014]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:54:23 localhost python3[39030]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:54:24 localhost python3[39050]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:54:25 localhost python3[39067]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:54:28 localhost python3[39083]: ansible-replace Invoked with regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:29 localhost sshd[39084]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:54:33 localhost python3[39101]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:33 localhost python3[39149]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:54:34 localhost python3[39194]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662073.3062372-70033-61177823672067/source _original_basename=tmp12icfplw follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:34 localhost python3[39224]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:35 localhost python3[39272]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:54:35 localhost python3[39315]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662074.8532314-70129-183458166602365/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=303a9e8dd06eeb9157c66bb31355109aa4c872ae backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:36 localhost python3[39377]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:54:36 localhost python3[39420]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662075.731821-70407-239080182332621/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=da1c3b8584bf2231cac158ee0d91c3ea69fbb742 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:36 localhost python3[39482]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:54:37 localhost python3[39525]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662076.6405096-70407-50174873424137/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2 checksum=cefd5bd69caea640bd56356af0b9c6878752d6a2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:37 localhost python3[39587]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:54:38 localhost python3[39630]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662077.5180104-70407-254312378040493/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=1bd75eeb71ad8a06f7ad5bd2e02e7279e09e867f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:38 localhost python3[39692]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:54:39 localhost python3[39735]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662078.5209446-70407-215929840763969/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:39 localhost python3[39797]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:54:40 localhost python3[39840]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662079.39071-70407-220704440541338/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=1c471308e2382b36016a260bb9c9a72bc4f65120 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:40 localhost python3[39902]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:54:40 localhost python3[39945]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662080.2945921-70407-124902819074386/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:41 localhost python3[40007]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:54:41 localhost python3[40050]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662081.1493134-70407-226169721851505/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=c605747c28ed219c21bc7a334ba3c66112b9a2b8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:42 localhost python3[40112]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:54:42 localhost python3[40169]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662082.0732875-70407-179504042793434/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:43 localhost python3[40257]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:54:43 localhost python3[40321]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662082.9530525-70407-151862547607562/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:44 localhost python3[40397]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:54:44 localhost python3[40441]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662083.8863442-70407-176185455576848/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=ace0296e780adbfa11d93013fc1b670cc14ab7b7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:45 localhost python3[40471]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:54:45 localhost python3[40519]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:54:46 localhost python3[40562]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662085.554342-71051-146365039118940/source _original_basename=tmp2kf8xn4b follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:50 localhost python3[40592]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 2 02:54:51 localhost python3[40653]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:54:54 localhost systemd[36014]: Starting Mark boot as successful...
Dec 2 02:54:54 localhost systemd[36014]: Finished Mark boot as successful.
Dec 2 02:54:55 localhost python3[40671]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:55:00 localhost python3[40688]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:55:01 localhost python3[40711]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:55:05 localhost sshd[40715]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:55:05 localhost python3[40732]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:55:06 localhost python3[40755]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:55:10 localhost python3[40772]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:55:15 localhost python3[40789]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:55:15 localhost python3[40812]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:55:20 localhost python3[40829]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:55:24 localhost python3[40846]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:55:24 localhost python3[40869]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:55:26 localhost sshd[40871]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:55:29 localhost python3[40888]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:55:33 localhost python3[40905]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:55:34 localhost python3[40928]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:55:38 localhost python3[40945]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:55:45 localhost python3[41023]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:46 localhost python3[41071]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:46 localhost python3[41089]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmp6k5u_4dg recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:47 localhost python3[41119]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:47 localhost python3[41182]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:48 localhost python3[41200]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:48 localhost python3[41262]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:48 localhost python3[41280]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:49 localhost python3[41342]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:49 localhost python3[41360]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:50 localhost python3[41422]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:50 localhost python3[41440]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:50 localhost python3[41502]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:51 localhost python3[41520]: ansible-ansible.legacy.file
Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:55:51 localhost python3[41582]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:55:51 localhost python3[41600]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:55:52 localhost python3[41662]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:55:52 localhost python3[41680]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:55:53 localhost python3[41742]: ansible-ansible.legacy.stat Invoked with 
path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:55:53 localhost python3[41760]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:55:53 localhost python3[41822]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:55:54 localhost python3[41840]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:55:54 localhost python3[41902]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:55:54 localhost python3[41920]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:55:55 localhost python3[41982]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:55:55 localhost python3[42000]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:55:56 localhost python3[42030]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 02:55:56 localhost python3[42078]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:55:57 localhost python3[42096]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmpvltr6rr2 recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:56:00 localhost python3[42126]: ansible-dnf Invoked with name=['firewalld'] 
state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 2 02:56:04 localhost python3[42143]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 02:56:05 localhost python3[42161]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 02:56:05 localhost python3[42179]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 02:56:05 localhost systemd[1]: Reloading. Dec 2 02:56:06 localhost systemd-rc-local-generator[42205]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 02:56:06 localhost systemd-sysv-generator[42211]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 02:56:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 02:56:06 localhost systemd[1]: Starting Netfilter Tables... Dec 2 02:56:06 localhost systemd[1]: Finished Netfilter Tables. 
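The multi-line shell snippets logged by the `ansible-ansible.legacy.command` entries above (the interface/MTU probes) arrive as single journal lines because rsyslog escapes embedded control characters as `#NNN`, where `NNN` is the octal byte value; `#012` is a line feed. A minimal sketch to recover the original multi-line form (the helper name `unescape_rsyslog` is ours, not anything from the log):

```python
import re

def unescape_rsyslog(s: str) -> str:
    """Replace rsyslog #NNN octal control-character escapes (e.g. #012 -> newline)."""
    return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), s)

# _raw_params value copied from one of the log entries above
raw = ('INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\\w+) .*/\\1/p")'
       '#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")'
       '#012echo "$INT $MTU"#012')

# Prints the recovered three-line script: resolve the egress interface for the
# target IP, read its MTU (defaulting to 0), and echo both.
print(unescape_rsyslog(raw))
```

The same decoding applies to the `blockinfile` entry later in the log, whose `block=` parameter uses `#012` between the `include` lines.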
Dec 2 02:56:07 localhost python3[42269]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:07 localhost python3[42312]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662166.8353975-73763-261211308044903/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:07 localhost python3[42342]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:08 localhost python3[42360]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:08 localhost python3[42409]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:09 localhost python3[42452]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662168.6203887-74106-225751472549208/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:09 localhost python3[42514]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:10 localhost python3[42557]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662169.5707421-74163-57053949448765/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:10 localhost python3[42619]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:11 localhost python3[42662]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662170.516338-74218-116552651574724/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:11 localhost python3[42724]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:12 localhost python3[42767]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662171.4040427-74280-249049247323966/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:12 localhost python3[42829]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:13 localhost python3[42872]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662172.277965-74320-21086605098136/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:13 localhost python3[42902]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:14 localhost python3[42967]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/tripleo-chains.nft"#012include "/etc/nftables/tripleo-rules.nft"#012include "/etc/nftables/tripleo-jumps.nft"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:14 localhost python3[42984]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:15 localhost python3[43001]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:15 localhost python3[43020]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:16 localhost python3[43036]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:16 localhost python3[43052]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:16 localhost python3[43068]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 2 02:56:17 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=7 res=1
Dec 2 02:56:18 localhost python3[43088]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 2 02:56:18 localhost kernel: SELinux: Converting 2703 SID table entries...
Dec 2 02:56:18 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 02:56:18 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 02:56:18 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 02:56:18 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 02:56:18 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 02:56:18 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 02:56:18 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 02:56:18 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=8 res=1
Dec 2 02:56:19 localhost python3[43109]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 2 02:56:19 localhost kernel: SELinux: Converting 2703 SID table entries...
Dec 2 02:56:19 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 02:56:19 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 02:56:19 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 02:56:19 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 02:56:19 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 02:56:19 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 02:56:19 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 02:56:20 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=9 res=1
Dec 2 02:56:20 localhost python3[43130]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 2 02:56:21 localhost kernel: SELinux: Converting 2703 SID table entries...
Dec 2 02:56:21 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 02:56:21 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 02:56:21 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 02:56:21 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 02:56:21 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 02:56:21 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 02:56:21 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 02:56:21 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=10 res=1
Dec 2 02:56:21 localhost python3[43151]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:22 localhost python3[43167]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:22 localhost python3[43183]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:22 localhost sshd[43198]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:56:22 localhost python3[43201]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:56:23 localhost python3[43217]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:24 localhost python3[43234]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:56:25 localhost sshd[43236]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:56:27 localhost python3[43253]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:28 localhost python3[43301]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:28 localhost python3[43344]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662188.167698-75157-160996103504889/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:29 localhost python3[43374]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 02:56:29 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 2 02:56:29 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 2 02:56:29 localhost systemd[1]: Stopping Load Kernel Modules...
Dec 2 02:56:29 localhost systemd[1]: Starting Load Kernel Modules...
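The systemd-modules-load restart above re-reads /etc/modules-load.d/99-tripleo.conf, which was just installed from tripleo-modprobe.conf.j2. Judging from the loader messages that follow in the log (br_netfilter inserted, msr built in), the file plausibly contains at least the lines below; this is a reconstruction from the log output, not the actual templated file, which may list additional modules:

```
# /etc/modules-load.d/99-tripleo.conf (reconstructed sketch)
br_netfilter
msr
```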
Dec 2 02:56:29 localhost kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 2 02:56:29 localhost systemd-modules-load[43377]: Inserted module 'br_netfilter'
Dec 2 02:56:29 localhost kernel: Bridge firewalling registered
Dec 2 02:56:29 localhost systemd-modules-load[43377]: Module 'msr' is built in
Dec 2 02:56:29 localhost systemd[1]: Finished Load Kernel Modules.
Dec 2 02:56:29 localhost python3[43428]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:30 localhost python3[43471]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662189.5857527-75223-53451697338259/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:30 localhost python3[43501]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:31 localhost python3[43518]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:31 localhost python3[43536]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:31 localhost python3[43554]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:33 localhost python3[43571]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:33 localhost python3[43588]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:33 localhost python3[43605]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:33 localhost python3[43623]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:34 localhost python3[43641]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:34 localhost python3[43659]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:34 localhost python3[43677]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:35 localhost python3[43695]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:35 localhost python3[43713]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:35 localhost python3[43731]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:36 localhost python3[43748]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:36 localhost python3[43765]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:36 localhost python3[43782]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:37 localhost python3[43799]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:37 localhost python3[43817]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 02:56:37 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 2 02:56:37 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 2 02:56:37 localhost systemd[1]: Stopping Apply Kernel Variables...
Dec 2 02:56:37 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 2 02:56:37 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 2 02:56:37 localhost systemd[1]: Finished Apply Kernel Variables.
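Taken together, the ansible-sysctl invocations above leave /etc/sysctl.d/99-tripleo.conf with settings equivalent to the fragment below before systemd-sysctl is restarted to apply them. The key/value pairs are copied from the logged calls; the module's exact file formatting and ordering may differ:

```
# /etc/sysctl.d/99-tripleo.conf (settings as logged above)
fs.aio-max-nr=1048576
fs.inotify.max_user_instances=1024
kernel.pid_max=1048576
net.bridge.bridge-nf-call-arptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.conf.all.rp_filter=1
net.ipv4.ip_forward=1
net.ipv4.ip_local_reserved_ports=35357,49000-49001
net.ipv4.ip_nonlocal_bind=1
net.ipv4.neigh.default.gc_thresh1=1024
net.ipv4.neigh.default.gc_thresh2=2048
net.ipv4.neigh.default.gc_thresh3=4096
net.ipv6.conf.all.disable_ipv6=0
net.ipv6.conf.all.forwarding=0
net.ipv6.conf.default.disable_ipv6=0
net.ipv6.conf.lo.disable_ipv6=0
net.ipv6.ip_nonlocal_bind=1
```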
Dec 2 02:56:37 localhost python3[43837]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:38 localhost python3[43853]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:38 localhost python3[43869]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:38 localhost python3[43885]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:56:39 localhost python3[43901]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:39 localhost python3[43917]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:39 localhost python3[43933]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:40 localhost python3[43949]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:40 localhost python3[43965]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:40 localhost python3[44013]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:41 localhost python3[44056]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662200.5573525-75723-51736172754302/source _original_basename=tmpbfbm_gjy follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:41 localhost python3[44086]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:43 localhost python3[44103]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:43 localhost python3[44151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:44 localhost python3[44194]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662203.7394264-75924-52890893692572/source _original_basename=tmppwqg972p follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:44 localhost python3[44224]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:45 localhost python3[44240]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:45 localhost python3[44256]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:45 localhost python3[44272]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:46 localhost python3[44288]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:46 localhost python3[44304]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:46 localhost python3[44320]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:46 localhost python3[44336]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:47 localhost python3[44352]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:47 localhost sshd[44375]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:56:47 localhost python3[44400]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False
Dec 2 02:56:48 localhost python3[44442]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541914.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 2 02:56:48 localhost python3[44497]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None
Dec 2 02:56:48 localhost python3[44544]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:49 localhost python3[44593]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:49 localhost python3[44651]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662209.1537104-76193-38631552878565/source _original_basename=tmpfmlija7_ follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:50 localhost python3[44681]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 2 02:56:51 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=11 res=1
Dec 2 02:56:51 localhost python3[44702]: ansible-file Invoked with path=/etc/crypto-policies/local.d/gnutls-qemu.config state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:51 localhost python3[44718]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:51 localhost python3[44734]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False
Dec 2 02:56:53 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=12 res=1
Dec 2 02:56:53 localhost python3[44754]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:56:56 localhost python3[44771]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 2 02:56:57 localhost python3[44832]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:57 localhost python3[44848]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:57 localhost python3[44907]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:58 localhost python3[44950]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662217.5876255-76516-27656382419816/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=b4295d801cd8c23bfe072937c7f9f133ab6cb946 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:58 localhost python3[45012]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:59 localhost python3[45057]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662218.5267909-76590-236212836960340/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:59 localhost python3[45087]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:59 localhost python3[45103]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:57:00 localhost python3[45119]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:57:00 localhost python3[45135]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:57:01 localhost python3[45183]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:57:01 localhost python3[45226]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662220.9355533-76763-236914699623339/source _original_basename=tmpx9_y_bke follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:57:02 localhost python3[45256]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:57:02 localhost python3[45272]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:57:02 localhost python3[45288]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:57:06 localhost python3[45337]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:57:06 localhost python3[45382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662226.1143267-76952-181373760118912/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:57:07 localhost python3[45413]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:57:07 localhost systemd[1]: Stopping OpenSSH server daemon...
Dec 2 02:57:07 localhost systemd[1]: sshd.service: Deactivated successfully.
Dec 2 02:57:07 localhost systemd[1]: Stopped OpenSSH server daemon.
Dec 2 02:57:07 localhost systemd[1]: sshd.service: Consumed 3.503s CPU time, read 2.6M from disk, written 144.0K to disk.
Dec 2 02:57:07 localhost systemd[1]: Stopped target sshd-keygen.target.
Dec 2 02:57:07 localhost systemd[1]: Stopping sshd-keygen.target...
Dec 2 02:57:07 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 02:57:07 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 02:57:07 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 02:57:07 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 2 02:57:07 localhost systemd[1]: Starting OpenSSH server daemon...
Dec 2 02:57:07 localhost sshd[45417]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:57:07 localhost systemd[1]: Started OpenSSH server daemon.
Dec 2 02:57:07 localhost python3[45433]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:57:08 localhost python3[45451]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:57:09 localhost python3[45469]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:57:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 02:57:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3399 writes, 16K keys, 3399 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s#012Cumulative WAL: 3399 writes, 202 syncs, 16.83 writes per sync, written: 0.01 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3399 writes, 16K keys, 3399 commit groups, 1.0 writes per commit group, ingest: 15.32 MB, 0.03 MB/s#012Interval WAL: 3399 writes, 202 syncs, 16.83 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012**
Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 
memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 
MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 
0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt
Dec 2 02:57:12 localhost python3[45518]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:57:12 localhost python3[45536]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:57:13 localhost python3[45566]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:57:14 localhost python3[45616]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:57:14 localhost python3[45634]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:57:14 localhost python3[45664]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:57:14 localhost systemd[1]: Reloading.
Dec 2 02:57:15 localhost systemd-sysv-generator[45690]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:57:15 localhost systemd-rc-local-generator[45686]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:57:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:57:15 localhost systemd[1]: Starting chronyd online sources service...
Dec 2 02:57:15 localhost chronyc[45704]: 200 OK
Dec 2 02:57:15 localhost systemd[1]: chrony-online.service: Deactivated successfully.
Dec 2 02:57:15 localhost systemd[1]: Finished chronyd online sources service.
Dec 2 02:57:15 localhost python3[45720]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:15 localhost chronyd[26062]: System clock was stepped by -0.000097 seconds Dec 2 02:57:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 02:57:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 3251 writes, 16K keys, 3251 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 3251 writes, 141 syncs, 23.06 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3251 writes, 16K keys, 3251 commit groups, 1.0 writes per commit group, ingest: 14.66 MB, 0.02 MB/s#012Interval WAL: 3251 writes, 141 syncs, 23.06 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) 
Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) 
Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Dec 2 02:57:16 localhost python3[45737]: ansible-ansible.legacy.command Invoked with 
_raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:16 localhost python3[45754]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:16 localhost chronyd[26062]: System clock was stepped by 0.000000 seconds Dec 2 02:57:16 localhost python3[45771]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:17 localhost python3[45788]: ansible-timezone Invoked with name=UTC hwclock=None Dec 2 02:57:17 localhost systemd[1]: Starting Time & Date Service... Dec 2 02:57:17 localhost systemd[1]: Started Time & Date Service. Dec 2 02:57:18 localhost python3[45808]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:19 localhost python3[45825]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:19 localhost python3[45842]: ansible-slurp Invoked with src=/etc/tuned/active_profile Dec 2 02:57:19 localhost python3[45858]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 02:57:20 localhost python3[45874]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t 
state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:21 localhost python3[45890]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:21 localhost python3[45938]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:57:21 localhost python3[45981]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662241.2627783-77976-200947271492114/source _original_basename=tmp40hvdeir follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:57:22 localhost python3[46043]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:57:22 localhost python3[46086]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662242.0980065-78082-234266542714096/source 
_original_basename=tmps5z0znie follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:57:23 localhost python3[46116]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Dec 2 02:57:23 localhost systemd[1]: Reloading. Dec 2 02:57:23 localhost systemd-rc-local-generator[46142]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 02:57:23 localhost systemd-sysv-generator[46146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 02:57:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 02:57:23 localhost python3[46170]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:24 localhost python3[46186]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:24 localhost systemd[36014]: Created slice User Background Tasks Slice. 
Dec 2 02:57:24 localhost systemd[36014]: Starting Cleanup of User's Temporary Files and Directories... Dec 2 02:57:24 localhost systemd[36014]: Finished Cleanup of User's Temporary Files and Directories. Dec 2 02:57:24 localhost python3[46204]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:24 localhost systemd[1]: run-netns-ns_temp.mount: Deactivated successfully. Dec 2 02:57:24 localhost python3[46221]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:25 localhost python3[46237]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:57:27 localhost python3[46285]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:57:27 localhost python3[46328]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662245.5488522-78271-209774938345307/source _original_basename=tmpfe2lbpj6 follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False 
force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:57:37 localhost sshd[46343]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:57:47 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Dec 2 02:57:49 localhost python3[46392]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 2 02:57:50 localhost python3[46425]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None Dec 2 02:57:50 localhost python3[46455]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 2 02:57:50 localhost python3[46471]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:57:51 localhost python3[46487]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:57:51 localhost python3[46503]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Dec 2 02:57:52 localhost kernel: SELinux: Converting 2706 SID table entries... Dec 2 02:57:52 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 2 02:57:52 localhost kernel: SELinux: policy capability open_perms=1 Dec 2 02:57:52 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 2 02:57:52 localhost kernel: SELinux: policy capability always_check_network=0 Dec 2 02:57:52 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 2 02:57:52 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 2 02:57:52 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 2 02:57:52 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=13 res=1 Dec 2 02:57:52 localhost python3[46540]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None 
serole=None selevel=None attributes=None Dec 2 02:57:54 localhost python3[46677]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 
'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': [
Dec 2 02:57:54 localhost rsyslogd[759]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Dec 2 02:57:55 localhost python3[46693]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 2 02:57:55 localhost python3[46709]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 2 02:57:56 localhost python3[46725]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}}
Dec 2 02:58:00 localhost python3[46773]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:58:01 localhost python3[46816]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662280.5079954-79790-158554665366788/source _original_basename=tmpgtlg0taw follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:58:01 localhost python3[46846]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:58:03 localhost python3[46969]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 2 02:58:05 localhost python3[47090]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 2 02:58:07 localhost python3[47106]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:58:08 localhost python3[47123]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:58:12 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 2 02:58:12 localhost dbus-broker-launch[14516]: Noticed file-system modification, trigger reload.
Dec 2 02:58:12 localhost dbus-broker-launch[14516]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 2 02:58:12 localhost dbus-broker-launch[14516]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 2 02:58:12 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 2 02:58:12 localhost systemd[1]: Reexecuting.
Dec 2 02:58:12 localhost systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 2 02:58:12 localhost systemd[1]: Detected virtualization kvm.
Dec 2 02:58:12 localhost systemd[1]: Detected architecture x86-64.
Dec 2 02:58:12 localhost systemd-sysv-generator[47182]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:58:12 localhost sshd[47162]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:58:12 localhost systemd-rc-local-generator[47178]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:58:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:58:21 localhost kernel: SELinux: Converting 2706 SID table entries...
Dec 2 02:58:21 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 02:58:21 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 02:58:21 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 02:58:21 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 02:58:21 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 02:58:21 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 02:58:21 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 02:58:21 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 2 02:58:21 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=14 res=1
Dec 2 02:58:21 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 2 02:58:22 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 02:58:22 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 02:58:22 localhost systemd[1]: Reloading.
Dec 2 02:58:22 localhost systemd-sysv-generator[47286]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:58:22 localhost systemd-rc-local-generator[47281]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:58:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:58:22 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 02:58:22 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 02:58:22 localhost systemd-journald[619]: Journal stopped
Dec 2 02:58:22 localhost systemd-journald[619]: Received SIGTERM from PID 1 (systemd).
Dec 2 02:58:22 localhost systemd[1]: Stopping Journal Service...
Dec 2 02:58:22 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 2 02:58:22 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 2 02:58:22 localhost systemd[1]: Stopped Journal Service.
Dec 2 02:58:22 localhost systemd[1]: systemd-journald.service: Consumed 1.854s CPU time.
Dec 2 02:58:22 localhost systemd[1]: Starting Journal Service...
Dec 2 02:58:22 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 2 02:58:22 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 2 02:58:22 localhost systemd[1]: systemd-udevd.service: Consumed 3.127s CPU time.
Dec 2 02:58:22 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 2 02:58:22 localhost systemd-journald[47679]: Journal started
Dec 2 02:58:22 localhost systemd-journald[47679]: Runtime Journal (/run/log/journal/510530184876bdc0ebb29e7199f63471) is 12.1M, max 314.7M, 302.5M free.
Dec 2 02:58:22 localhost systemd[1]: Started Journal Service.
Dec 2 02:58:22 localhost systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation.
Dec 2 02:58:22 localhost systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 2 02:58:22 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 2 02:58:22 localhost systemd-udevd[47688]: Using default interface naming scheme 'rhel-9.0'.
Dec 2 02:58:22 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 2 02:58:22 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 2 02:58:22 localhost systemd[1]: Reloading.
Dec 2 02:58:23 localhost systemd-sysv-generator[48309]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:58:23 localhost systemd-rc-local-generator[48305]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:58:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:58:23 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 02:58:23 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 02:58:23 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 02:58:23 localhost systemd[1]: man-db-cache-update.service: Consumed 1.109s CPU time.
Dec 2 02:58:23 localhost systemd[1]: run-rfabf144092414b60bcb1134c36603e1a.service: Deactivated successfully.
Dec 2 02:58:23 localhost systemd[1]: run-re656b0bcf7e24cb19d18946d33784dcb.service: Deactivated successfully.
Dec 2 02:58:25 localhost python3[48621]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False
Dec 2 02:58:25 localhost python3[48640]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:58:26 localhost sshd[48659]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:58:26 localhost python3[48658]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:58:26 localhost python3[48658]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json
Dec 2 02:58:26 localhost python3[48658]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false
Dec 2 02:58:33 localhost podman[48672]: 2025-12-02 07:58:26.558062488 +0000 UTC m=+0.022656192 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 2 02:58:33 localhost python3[48658]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect bac901955dcf7a32a493c6ef724c092009bbc18467858aa8c55e916b8c2b2b8f --format json
Dec 2 02:58:34 localhost python3[48772]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:58:34 localhost python3[48772]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json
Dec 2 02:58:34 localhost python3[48772]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false
Dec 2 02:58:41 localhost podman[48786]: 2025-12-02 07:58:34.105731094 +0000 UTC m=+0.029671685 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 2 02:58:41 localhost python3[48772]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 44feaf8d87c1d40487578230316b622680576d805efdb45dfeea6aad464b41f1 --format json
Dec 2 02:58:41 localhost python3[48889]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:58:41 localhost python3[48889]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json
Dec 2 02:58:41 localhost python3[48889]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false
Dec 2 02:59:00 localhost podman[48903]: 2025-12-02 07:58:41.876652619 +0000 UTC m=+0.040897950 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 2 02:59:00 localhost python3[48889]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 3a088c12511c977065fdc5f1594cba7b1a79f163578a6ffd0ac4a475b8e67938 --format json
Dec 2 02:59:00 localhost systemd[1]: tmp-crun.DHJ7ep.mount: Deactivated successfully.
Dec 2 02:59:00 localhost podman[49572]: 2025-12-02 07:59:00.771574367 +0000 UTC m=+0.087931690 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux , version=7, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc.)
Dec 2 02:59:00 localhost python3[49571]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:00 localhost python3[49571]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json
Dec 2 02:59:00 localhost podman[49572]: 2025-12-02 07:59:00.864827904 +0000 UTC m=+0.181185217 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, ceph=True, release=1763362218, vendor=Red Hat, Inc., GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, name=rhceph, io.openshift.expose-services=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vcs-type=git, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, distribution-scope=public)
Dec 2 02:59:00 localhost python3[49571]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false
Dec 2 02:59:01 localhost sshd[49693]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:59:15 localhost podman[49604]: 2025-12-02 07:59:00.96702657 +0000 UTC m=+0.089823150 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 2 02:59:15 localhost python3[49571]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 514d439186251360cf734cbc6d4a44c834664891872edf3798a653dfaacf10c0 --format json
Dec 2 02:59:15 localhost python3[49809]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:15 localhost python3[49809]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json
Dec 2 02:59:15 localhost python3[49809]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false
Dec 2 02:59:22 localhost systemd[1]: Starting dnf makecache...
Dec 2 02:59:22 localhost dnf[50052]: Updating Subscription Management repositories.
Dec 2 02:59:22 localhost podman[49822]: 2025-12-02 07:59:15.916762845 +0000 UTC m=+0.046605658 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 2 02:59:22 localhost python3[49809]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a9dd7a2ac6f35cb086249f87f74e2f8e74e7e2ad5141ce2228263be6faedce26 --format json
Dec 2 02:59:23 localhost python3[50079]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:23 localhost python3[50079]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json
Dec 2 02:59:23 localhost python3[50079]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false
Dec 2 02:59:24 localhost dnf[50052]: Failed determining last makecache time.
Dec 2 02:59:24 localhost dnf[50052]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 26 kB/s | 4.5 kB 00:00
Dec 2 02:59:24 localhost dnf[50052]: Red Hat Enterprise Linux 9 for x86_64 - High Av 30 kB/s | 4.0 kB 00:00
Dec 2 02:59:24 localhost dnf[50052]: Fast Datapath for RHEL 9 x86_64 (RPMs) 29 kB/s | 4.0 kB 00:00
Dec 2 02:59:24 localhost dnf[50052]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 27 kB/s | 4.1 kB 00:00
Dec 2 02:59:25 localhost dnf[50052]: Red Hat OpenStack Platform 17.1 for RHEL 9 x86_ 30 kB/s | 4.0 kB 00:00
Dec 2 02:59:25 localhost dnf[50052]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 33 kB/s | 4.5 kB 00:00
Dec 2 02:59:25 localhost dnf[50052]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 30 kB/s | 4.1 kB 00:00
Dec 2 02:59:25 localhost dnf[50052]: Metadata cache created.
Dec 2 02:59:25 localhost systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 2 02:59:25 localhost systemd[1]: Finished dnf makecache.
Dec 2 02:59:25 localhost systemd[1]: dnf-makecache.service: Consumed 2.938s CPU time.
Dec 2 02:59:28 localhost podman[50091]: 2025-12-02 07:59:23.350436381 +0000 UTC m=+0.043809212 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 2 02:59:28 localhost python3[50079]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 24976907b2c2553304119aba5731a800204d664feed24ca9eb7f2b4c7d81016b --format json
Dec 2 02:59:28 localhost python3[50176]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:28 localhost python3[50176]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json
Dec 2 02:59:28 localhost python3[50176]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false
Dec 2 02:59:30 localhost podman[50188]: 2025-12-02 07:59:28.54920576 +0000 UTC m=+0.044221574 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 2 02:59:30 localhost python3[50176]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 57163a7b21fdbb804a27897cb6e6052a5e5c7a339c45d663e80b52375a760dcf --format json
Dec 2 02:59:31 localhost python3[50267]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:31 localhost python3[50267]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json
Dec 2 02:59:31 localhost python3[50267]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false
Dec 2 02:59:33 localhost podman[50281]: 2025-12-02 07:59:31.329594125 +0000 UTC m=+0.036967996 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 2 02:59:33 localhost python3[50267]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 076d82a27d63c8328729ed27ceb4291585ae18d017befe6fe353df7aa11715ae --format json
Dec 2 02:59:33 localhost python3[50359]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:33 localhost python3[50359]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json
Dec 2 02:59:34 localhost python3[50359]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false
Dec 2 02:59:36 localhost sshd[50410]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:59:37 localhost podman[50372]: 2025-12-02 07:59:34.099860845 +0000 UTC m=+0.042479159 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Dec 2 02:59:37 localhost python3[50359]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect d0dbcb95546840a8d088df044347a7877ad5ea45a2ddba0578e9bb5de4ab0da5 --format json
Dec 2 02:59:37 localhost python3[50451]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:37 localhost python3[50451]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json
Dec 2 02:59:37 localhost python3[50451]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false
Dec 2 02:59:41 localhost podman[50465]: 2025-12-02 07:59:37.867665822 +0000 UTC m=+0.046539517 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 2 02:59:41 localhost python3[50451]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect e6e981540e553415b2d6eda490d7683db07164af2e7a0af8245623900338a4d6 --format json
Dec 2 02:59:42 localhost python3[50555]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:42 localhost python3[50555]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json
Dec 2 02:59:42 localhost python3[50555]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false
Dec 2 02:59:44 localhost podman[50569]: 2025-12-02 07:59:42.174992541 +0000 UTC m=+0.047939511 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 2 02:59:44 localhost python3[50555]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 87ee88cbf01fb42e0b22747072843bcca6130a90eda4de6e74b3ccd847bb4040 --format json
Dec 2 02:59:45 localhost python3[50648]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:59:46 localhost ansible-async_wrapper.py[50820]: Invoked with 356590835224 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662386.2252982-82653-151212817962803/AnsiballZ_command.py _
Dec 2 02:59:46 localhost ansible-async_wrapper.py[50823]: Starting module and watcher
Dec 2 02:59:46 localhost ansible-async_wrapper.py[50823]: Start watching 50824 (3600)
Dec 2 02:59:46 localhost ansible-async_wrapper.py[50824]: Start module (50824)
Dec 2 02:59:46 localhost ansible-async_wrapper.py[50820]: Return async_wrapper task started.
Dec 2 02:59:47 localhost python3[50844]: ansible-ansible.legacy.async_status Invoked with jid=356590835224.50820 mode=status _async_dir=/tmp/.ansible_async
Dec 2 02:59:51 localhost puppet-user[50828]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 2 02:59:51 localhost puppet-user[50828]: (file: /etc/puppet/hiera.yaml)
Dec 2 02:59:51 localhost puppet-user[50828]: Warning: Undefined variable '::deploy_config_name';
Dec 2 02:59:51 localhost puppet-user[50828]: (file & line not available)
Dec 2 02:59:51 localhost puppet-user[50828]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 2 02:59:51 localhost puppet-user[50828]: (file & line not available)
Dec 2 02:59:51 localhost puppet-user[50828]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 2 02:59:51 localhost puppet-user[50828]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 2 02:59:51 localhost puppet-user[50828]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.13 seconds
Dec 2 02:59:51 localhost puppet-user[50828]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully
Dec 2 02:59:51 localhost puppet-user[50828]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Dec 2 02:59:51 localhost puppet-user[50828]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Dec 2 02:59:51 localhost puppet-user[50828]: Notice: Applied catalog in 0.06 seconds
Dec 2 02:59:51 localhost puppet-user[50828]: Application:
Dec 2 02:59:51 localhost puppet-user[50828]: Initial environment: production
Dec 2 02:59:51 localhost puppet-user[50828]: Converged environment: production
Dec 2 02:59:51 localhost puppet-user[50828]: Run mode: user
Dec 2 02:59:51 localhost puppet-user[50828]: Changes:
Dec 2 02:59:51 localhost puppet-user[50828]: Total: 3
Dec 2 02:59:51 localhost puppet-user[50828]: Events:
Dec 2 02:59:51 localhost puppet-user[50828]: Success: 3
Dec 2 02:59:51 localhost puppet-user[50828]: Total: 3
Dec 2 02:59:51 localhost puppet-user[50828]: Resources:
Dec 2 02:59:51 localhost puppet-user[50828]: Changed: 3
Dec 2 02:59:51 localhost puppet-user[50828]: Out of sync: 3
Dec 2 02:59:51 localhost puppet-user[50828]: Total: 10
Dec 2 02:59:51 localhost puppet-user[50828]: Time:
Dec 2 02:59:51 localhost puppet-user[50828]: Schedule: 0.00
Dec 2 02:59:51 localhost puppet-user[50828]: File: 0.00
Dec 2 02:59:51 localhost puppet-user[50828]: Exec: 0.02
Dec 2 02:59:51 localhost puppet-user[50828]: Augeas: 0.03
Dec 2 02:59:51 localhost puppet-user[50828]: Transaction evaluation: 0.06
Dec 2 02:59:51 localhost puppet-user[50828]: Catalog application: 0.06
Dec 2 02:59:51 localhost puppet-user[50828]: Config retrieval: 0.16
Dec 2 02:59:51 localhost puppet-user[50828]: Last run: 1764662391
Dec 2 02:59:51 localhost puppet-user[50828]: Filebucket: 0.00
Dec 2 02:59:51 localhost puppet-user[50828]: Total: 0.06
Dec 2 02:59:51 localhost puppet-user[50828]: Version:
Dec 2 02:59:51 localhost puppet-user[50828]: Config: 1764662391
Dec 2 02:59:51 localhost puppet-user[50828]: Puppet: 7.10.0
Dec 2 02:59:51 localhost ansible-async_wrapper.py[50824]: Module complete (50824)
Dec 2 02:59:51 localhost ansible-async_wrapper.py[50823]: Done in kid B.
Dec 2 02:59:57 localhost python3[50971]: ansible-ansible.legacy.async_status Invoked with jid=356590835224.50820 mode=status _async_dir=/tmp/.ansible_async
Dec 2 02:59:58 localhost python3[50987]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 2 02:59:59 localhost python3[51046]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:59:59 localhost python3[51094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:00:00 localhost python3[51137]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662399.2834432-82882-10240848673277/source _original_basename=tmpj27hrm6x follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 2 03:00:00 localhost python3[51167]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:00:01 localhost python3[51270]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 2 03:00:02 localhost python3[51289]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 03:00:02 localhost python3[51305]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005541914 step=1 update_config_hash_only=False
Dec 2 03:00:02 localhost python3[51321]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:00:03 localhost python3[51337]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 2 03:00:03 localhost python3[51383]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 2 03:00:04 localhost python3[51455]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False
Dec 2 03:00:05 localhost podman[51625]: 2025-12-02 08:00:04.969612082 +0000 UTC m=+0.040606411 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 2 03:00:05 localhost podman[51644]: 2025-12-02 08:00:04.976731745 +0000 UTC m=+0.029799580 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 2 03:00:05 localhost podman[51625]: 2025-12-02 08:00:05.118540244 +0000 UTC m=+0.189534633 container create e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, container_name=container-puppet-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 2 03:00:05 localhost podman[51644]: 2025-12-02 08:00:05.178170772 +0000 UTC m=+0.231238587 container create 396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, container_name=container-puppet-crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron)
Dec 2 03:00:05 localhost podman[51632]: 2025-12-02 08:00:05.103999879 +0000 UTC m=+0.154726654 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 2 03:00:05 localhost podman[51669]: 2025-12-02 08:00:05.105895656 +0000 UTC m=+0.128310447 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 2 03:00:05 localhost podman[51669]: 2025-12-02 08:00:05.208359582 +0000 UTC m=+0.230774343 container create bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=container-puppet-metrics_qdr, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, architecture=x86_64)
Dec 2 03:00:05 localhost podman[51663]: 2025-12-02 08:00:05.112879365 +0000 UTC m=+0.132047999 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 2 03:00:05 localhost systemd[1]: Started libpod-conmon-e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243.scope.
Dec 2 03:00:05 localhost systemd[1]: Started libpod-conmon-396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27.scope.
Dec 2 03:00:05 localhost podman[51663]: 2025-12-02 08:00:05.249752505 +0000 UTC m=+0.268921169 container create 85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_id=tripleo_puppet_step1, container_name=container-puppet-collectd, com.redhat.component=openstack-collectd-container, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team)
Dec 2 03:00:05 localhost systemd[1]: Started libcrun container.
Dec 2 03:00:05 localhost systemd[1]: Started libpod-conmon-bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7.scope.
Dec 2 03:00:05 localhost systemd[1]: Started libcrun container.
Dec 2 03:00:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400c7ba0962a9736ae4730e3c3204c67b2bad8d9266c2a49e5c729fb35c892ee/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 2 03:00:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd4674896f37ed03c180aa0ab9f93ced388cfe5185ce6c19dc1fe143ce7985a/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff)
Dec 2 03:00:05 localhost systemd[1]: Started libpod-conmon-85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7.scope.
Dec 2 03:00:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd4674896f37ed03c180aa0ab9f93ced388cfe5185ce6c19dc1fe143ce7985a/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 2 03:00:05 localhost systemd[1]: Started libcrun container.
Dec 2 03:00:05 localhost systemd[1]: Started libcrun container.
Dec 2 03:00:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbd426914bbc0b3c94f281248297da1bdd998807cad604e4ab2f39851a1899c/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 2 03:00:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f51912cd7ca4d93a076413ed4727a62a427f09f722d7bf72e350182571c8db0/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 2 03:00:05 localhost podman[51625]: 2025-12-02 08:00:05.295178071 +0000 UTC m=+0.366172390 container init e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, architecture=x86_64, vcs-type=git, config_id=tripleo_puppet_step1, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc., container_name=container-puppet-iscsid, name=rhosp17/openstack-iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 2 03:00:05 localhost podman[51632]: 2025-12-02 08:00:05.301354955 +0000 UTC m=+0.352081760 container create 0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, container_name=container-puppet-nova_libvirt, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1)
Dec 2 03:00:05 localhost systemd[1]: Started libpod-conmon-0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b.scope.
Dec 2 03:00:05 localhost systemd[1]: Started libcrun container.
Dec 2 03:00:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b31a729f52d6f9ece82ff86db83ec0c0420ae47f49a38ed5b1f2bb83a229399e/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 2 03:00:06 localhost podman[51644]: 2025-12-02 08:00:06.016922352 +0000 UTC m=+1.069990207 container init 396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, tcib_managed=true, container_name=container-puppet-crond, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 2 03:00:06 localhost podman[51644]: 2025-12-02 08:00:06.453777268 +0000 UTC m=+1.506845113 container start 396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., container_name=container-puppet-crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 2 03:00:06 localhost podman[51644]: 2025-12-02 08:00:06.456270852 +0000 UTC m=+1.509338747 container attach 396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, container_name=container-puppet-crond, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack 
Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1) Dec 2 03:00:06 localhost podman[51663]: 2025-12-02 08:00:06.484305908 +0000 UTC m=+1.503474582 container init 85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=container-puppet-collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': 
True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, io.openshift.expose-services=) Dec 2 03:00:06 localhost systemd[1]: tmp-crun.dKzOJv.mount: Deactivated successfully. 
Dec 2 03:00:06 localhost podman[51663]: 2025-12-02 08:00:06.504920852 +0000 UTC m=+1.524089496 container start 85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_puppet_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=container-puppet-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team) Dec 2 03:00:06 localhost podman[51663]: 2025-12-02 08:00:06.505556962 +0000 UTC m=+1.524725666 container attach 85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 
'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-collectd-container, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, container_name=container-puppet-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z) Dec 2 03:00:06 localhost podman[51669]: 2025-12-02 08:00:06.515520978 +0000 UTC m=+1.537935729 container init bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, summary=Red Hat 
OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=container-puppet-metrics_qdr, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
build-date=2025-11-18T22:49:46Z, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:00:06 localhost podman[51625]: 2025-12-02 08:00:06.52058586 +0000 UTC m=+1.591580179 container start e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T23:44:13Z, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': 
'/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:00:06 localhost podman[51625]: 2025-12-02 08:00:06.520968771 +0000 UTC m=+1.591963140 container attach e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=container-puppet-iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': 
'/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 
17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true) Dec 2 03:00:06 localhost podman[51669]: 2025-12-02 08:00:06.52967616 +0000 UTC m=+1.552090911 container start bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, managed_by=tripleo_ansible) Dec 2 03:00:06 localhost podman[51669]: 2025-12-02 08:00:06.530145754 +0000 UTC m=+1.552560505 container attach bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, container_name=container-puppet-metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 
'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:00:06 localhost podman[51632]: 2025-12-02 08:00:06.533472114 +0000 UTC m=+1.584198909 container 
init 0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, container_name=container-puppet-nova_libvirt, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:00:06 localhost podman[51632]: 2025-12-02 08:00:06.55077023 +0000 UTC m=+1.601497065 container start 0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, config_id=tripleo_puppet_step1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, build-date=2025-11-19T00:35:22Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-nova_libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt) Dec 2 03:00:06 localhost podman[51632]: 2025-12-02 08:00:06.551141091 +0000 UTC m=+1.601867936 container attach 0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 
'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-nova_libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:35:22Z, maintainer=OpenStack TripleO Team) Dec 2 03:00:08 localhost puppet-user[51784]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 2 03:00:08 localhost puppet-user[51784]: (file: /etc/puppet/hiera.yaml) Dec 2 03:00:08 localhost puppet-user[51784]: Warning: Undefined variable '::deploy_config_name'; Dec 2 03:00:08 localhost puppet-user[51784]: (file & line not available) Dec 2 03:00:08 localhost puppet-user[51828]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 2 03:00:08 localhost puppet-user[51828]: (file: /etc/puppet/hiera.yaml) Dec 2 03:00:08 localhost puppet-user[51828]: Warning: Undefined variable '::deploy_config_name'; Dec 2 03:00:08 localhost puppet-user[51828]: (file & line not available) Dec 2 03:00:08 localhost puppet-user[51784]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 2 03:00:08 localhost puppet-user[51784]: (file & line not available) Dec 2 03:00:08 localhost ovs-vsctl[52068]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Dec 2 03:00:08 localhost puppet-user[51828]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 2 03:00:08 localhost puppet-user[51828]: (file & line not available) Dec 2 03:00:08 localhost puppet-user[51784]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.09 seconds Dec 2 03:00:08 localhost podman[51546]: 2025-12-02 08:00:04.881911867 +0000 UTC m=+0.030887491 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Dec 2 03:00:08 localhost puppet-user[51828]: Notice: Accepting previously invalid value for target type 'Integer' Dec 2 03:00:08 localhost puppet-user[51782]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 2 03:00:08 localhost puppet-user[51782]: (file: /etc/puppet/hiera.yaml) Dec 2 03:00:08 localhost puppet-user[51782]: Warning: Undefined variable '::deploy_config_name'; Dec 2 03:00:08 localhost puppet-user[51782]: (file & line not available) Dec 2 03:00:08 localhost puppet-user[51784]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0' Dec 2 03:00:08 localhost puppet-user[51784]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created Dec 2 03:00:08 localhost puppet-user[51828]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.13 seconds Dec 2 03:00:08 localhost puppet-user[51782]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 2 03:00:08 localhost puppet-user[51782]: (file & line not available) Dec 2 03:00:08 localhost puppet-user[51784]: Notice: Applied catalog in 0.04 seconds Dec 2 03:00:08 localhost puppet-user[51784]: Application: Dec 2 03:00:08 localhost puppet-user[51784]: Initial environment: production Dec 2 03:00:08 localhost puppet-user[51784]: Converged environment: production Dec 2 03:00:08 localhost puppet-user[51784]: Run mode: user Dec 2 03:00:08 localhost puppet-user[51784]: Changes: Dec 2 03:00:08 localhost puppet-user[51784]: Total: 2 Dec 2 03:00:08 localhost puppet-user[51784]: Events: Dec 2 03:00:08 localhost puppet-user[51784]: Success: 2 Dec 2 03:00:08 localhost puppet-user[51784]: Total: 2 Dec 2 03:00:08 localhost puppet-user[51784]: Resources: Dec 2 03:00:08 localhost puppet-user[51784]: Changed: 2 Dec 2 03:00:08 localhost puppet-user[51784]: Out of sync: 2 Dec 2 03:00:08 localhost puppet-user[51784]: Skipped: 7 Dec 2 03:00:08 localhost puppet-user[51784]: Total: 9 Dec 2 03:00:08 localhost puppet-user[51784]: Time: Dec 2 03:00:08 localhost puppet-user[51784]: File: 0.01 Dec 2 03:00:08 localhost puppet-user[51784]: Cron: 0.01 Dec 2 03:00:08 localhost puppet-user[51784]: Transaction evaluation: 0.04 Dec 2 03:00:08 localhost puppet-user[51784]: Catalog application: 0.04 Dec 2 03:00:08 localhost puppet-user[51784]: Config retrieval: 0.12 Dec 2 03:00:08 localhost puppet-user[51784]: Last run: 1764662408 Dec 2 03:00:08 localhost puppet-user[51784]: Total: 0.04 Dec 2 03:00:08 localhost puppet-user[51784]: Version: Dec 2 03:00:08 localhost puppet-user[51784]: Config: 1764662408 Dec 2 03:00:08 localhost puppet-user[51784]: Puppet: 7.10.0 Dec 2 03:00:08 localhost puppet-user[51828]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root' Dec 2 03:00:08 localhost puppet-user[51828]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 
'qdrouterd' to 'root' Dec 2 03:00:08 localhost puppet-user[51828]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755' Dec 2 03:00:08 localhost puppet-user[51828]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created Dec 2 03:00:08 localhost puppet-user[51828]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}296a77cf0860ceaf3513703c18bbb7eb622db175df9af5d6bfe1bade3b73a54a' Dec 2 03:00:08 localhost puppet-user[51828]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created Dec 2 03:00:08 localhost puppet-user[51828]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created Dec 2 03:00:08 localhost puppet-user[51828]: Notice: Applied catalog in 0.03 seconds Dec 2 03:00:08 localhost puppet-user[51828]: Application: Dec 2 03:00:08 localhost puppet-user[51828]: Initial environment: production Dec 2 03:00:08 localhost puppet-user[51828]: Converged environment: production Dec 2 03:00:08 localhost puppet-user[51828]: Run mode: user Dec 2 03:00:08 localhost puppet-user[51828]: Changes: Dec 2 03:00:08 localhost puppet-user[51828]: Total: 7 Dec 2 03:00:08 localhost puppet-user[51828]: Events: Dec 2 03:00:08 localhost puppet-user[51828]: Success: 7 Dec 2 03:00:08 localhost puppet-user[51828]: Total: 7 Dec 2 03:00:08 localhost puppet-user[51828]: Resources: Dec 2 03:00:08 localhost puppet-user[51828]: Skipped: 13 Dec 2 03:00:08 localhost puppet-user[51828]: Changed: 5 Dec 2 03:00:08 localhost puppet-user[51828]: Out of sync: 5 Dec 2 03:00:08 localhost puppet-user[51828]: Total: 20 Dec 2 03:00:08 localhost puppet-user[51828]: Time: Dec 2 03:00:08 localhost puppet-user[51828]: File: 0.01 Dec 2 03:00:08 localhost puppet-user[51828]: Transaction evaluation: 0.02 Dec 2 03:00:08 localhost puppet-user[51828]: Catalog application: 0.03 Dec 2 03:00:08 
localhost puppet-user[51828]: Config retrieval: 0.16 Dec 2 03:00:08 localhost puppet-user[51828]: Last run: 1764662408 Dec 2 03:00:08 localhost puppet-user[51828]: Total: 0.03 Dec 2 03:00:08 localhost puppet-user[51828]: Version: Dec 2 03:00:08 localhost puppet-user[51828]: Config: 1764662408 Dec 2 03:00:08 localhost puppet-user[51828]: Puppet: 7.10.0 Dec 2 03:00:08 localhost puppet-user[51782]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.11 seconds Dec 2 03:00:08 localhost puppet-user[51811]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 2 03:00:08 localhost puppet-user[51811]: (file: /etc/puppet/hiera.yaml) Dec 2 03:00:08 localhost puppet-user[51811]: Warning: Undefined variable '::deploy_config_name'; Dec 2 03:00:08 localhost puppet-user[51811]: (file & line not available) Dec 2 03:00:08 localhost puppet-user[51840]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 2 03:00:08 localhost puppet-user[51840]: (file: /etc/puppet/hiera.yaml) Dec 2 03:00:08 localhost puppet-user[51840]: Warning: Undefined variable '::deploy_config_name'; Dec 2 03:00:08 localhost puppet-user[51840]: (file & line not available) Dec 2 03:00:08 localhost puppet-user[51782]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully Dec 2 03:00:08 localhost puppet-user[51782]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created Dec 2 03:00:08 localhost puppet-user[51811]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 2 03:00:08 localhost puppet-user[51811]: (file & line not available) Dec 2 03:00:08 localhost puppet-user[51840]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 2 03:00:08 localhost puppet-user[51840]: (file & line not available) Dec 2 03:00:08 localhost puppet-user[51782]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully Dec 2 03:00:08 localhost podman[52224]: 2025-12-02 08:00:08.631927726 +0000 UTC m=+0.072340208 container create acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=container-puppet-ceilometer, build-date=2025-11-19T00:11:59Z, com.redhat.component=openstack-ceilometer-central-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, config_id=tripleo_puppet_step1, distribution-scope=public, name=rhosp17/openstack-ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, architecture=x86_64, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:00:08 localhost systemd[1]: Started libpod-conmon-acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7.scope. Dec 2 03:00:08 localhost systemd[1]: tmp-crun.LT1SPD.mount: Deactivated successfully. Dec 2 03:00:08 localhost systemd[1]: Started libcrun container. 
Dec 2 03:00:08 localhost podman[52224]: 2025-12-02 08:00:08.599582701 +0000 UTC m=+0.039995173 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Dec 2 03:00:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d2cbcd6205ebc71bef7b0378e46c50958788e3d833a076a9d36ebe402a8a467/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:08 localhost podman[52224]: 2025-12-02 08:00:08.711633042 +0000 UTC m=+0.152045524 container init acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-central-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-central, container_name=container-puppet-ceilometer, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1761123044, distribution-scope=public, build-date=2025-11-19T00:11:59Z, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, architecture=x86_64, vcs-type=git, config_id=tripleo_puppet_step1) Dec 2 03:00:08 localhost podman[52224]: 2025-12-02 08:00:08.758779928 +0000 UTC m=+0.199192400 container start acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1761123044, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-central-container, 
io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=container-puppet-ceilometer, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-19T00:11:59Z, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:00:08 localhost podman[52224]: 2025-12-02 08:00:08.759119448 +0000 UTC m=+0.199532010 container attach acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_puppet_step1, build-date=2025-11-19T00:11:59Z, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1761123044, container_name=container-puppet-ceilometer, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude 
tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public) Dec 2 03:00:08 localhost systemd[1]: libpod-396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27.scope: Deactivated successfully. Dec 2 03:00:08 localhost systemd[1]: libpod-396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27.scope: Consumed 2.199s CPU time. 
Dec 2 03:00:08 localhost puppet-user[51840]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \ Dec 2 03:00:08 localhost puppet-user[51840]: in a future release. Use nova::cinder::os_region_name instead Dec 2 03:00:08 localhost puppet-user[51840]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \ Dec 2 03:00:08 localhost puppet-user[51840]: in a future release. Use nova::cinder::catalog_info instead Dec 2 03:00:08 localhost systemd[1]: libpod-bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7.scope: Deactivated successfully. Dec 2 03:00:08 localhost systemd[1]: libpod-bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7.scope: Consumed 2.153s CPU time. Dec 2 03:00:08 localhost puppet-user[51811]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.36 seconds Dec 2 03:00:08 localhost podman[51644]: 2025-12-02 08:00:08.929718845 +0000 UTC m=+3.982786670 container died 396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 
'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, container_name=container-puppet-crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public) Dec 2 03:00:08 localhost puppet-user[51840]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. 
(file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41) Dec 2 03:00:08 localhost podman[51669]: 2025-12-02 08:00:08.982921852 +0000 UTC m=+4.005336603 container died bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, release=1761123044, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:00:09 localhost puppet-user[51782]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully Dec 2 03:00:09 localhost puppet-user[51782]: Notice: Applied catalog in 0.46 seconds Dec 2 03:00:09 localhost puppet-user[51782]: Application: Dec 2 03:00:09 localhost puppet-user[51782]: Initial environment: production Dec 2 03:00:09 localhost puppet-user[51782]: Converged environment: production Dec 2 03:00:09 localhost puppet-user[51782]: Run mode: user Dec 2 03:00:09 localhost puppet-user[51782]: Changes: Dec 2 03:00:09 localhost puppet-user[51782]: Total: 4 Dec 2 03:00:09 localhost puppet-user[51782]: Events: Dec 2 03:00:09 localhost puppet-user[51782]: Success: 4 Dec 2 03:00:09 localhost puppet-user[51782]: Total: 4 Dec 2 03:00:09 localhost puppet-user[51782]: Resources: Dec 2 03:00:09 localhost puppet-user[51782]: Changed: 4 Dec 2 03:00:09 localhost puppet-user[51782]: Out of sync: 4 Dec 2 03:00:09 localhost puppet-user[51782]: Skipped: 8 Dec 2 03:00:09 localhost puppet-user[51782]: Total: 13 Dec 2 03:00:09 
localhost puppet-user[51782]: Time: Dec 2 03:00:09 localhost puppet-user[51782]: File: 0.00 Dec 2 03:00:09 localhost puppet-user[51782]: Exec: 0.05 Dec 2 03:00:09 localhost puppet-user[51782]: Config retrieval: 0.14 Dec 2 03:00:09 localhost puppet-user[51782]: Augeas: 0.39 Dec 2 03:00:09 localhost puppet-user[51782]: Transaction evaluation: 0.45 Dec 2 03:00:09 localhost puppet-user[51782]: Catalog application: 0.46 Dec 2 03:00:09 localhost puppet-user[51782]: Last run: 1764662409 Dec 2 03:00:09 localhost puppet-user[51782]: Total: 0.46 Dec 2 03:00:09 localhost puppet-user[51782]: Version: Dec 2 03:00:09 localhost puppet-user[51782]: Config: 1764662408 Dec 2 03:00:09 localhost puppet-user[51782]: Puppet: 7.10.0 Dec 2 03:00:09 localhost podman[52322]: 2025-12-02 08:00:09.026684077 +0000 UTC m=+0.080878053 container cleanup bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, container_name=container-puppet-metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container) Dec 2 03:00:09 localhost systemd[1]: libpod-conmon-bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7.scope: Deactivated successfully. 
Dec 2 03:00:09 localhost python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::qdr#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file 
--log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 2 03:00:09 localhost puppet-user[51840]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5) Dec 2 03:00:09 localhost puppet-user[51840]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5) Dec 2 03:00:09 localhost puppet-user[51840]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. 
(file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5) Dec 2 03:00:09 localhost podman[52321]: 2025-12-02 08:00:09.063699121 +0000 UTC m=+0.124949917 container cleanup 396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T22:49:32Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:00:09 localhost systemd[1]: libpod-conmon-396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27.scope: Deactivated successfully. 
Dec 2 03:00:09 localhost python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt 
path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640' Dec 2 03:00:09 localhost puppet-user[51840]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated Dec 2 03:00:09 localhost puppet-user[51840]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use 
cpu_dedicated_set or cpu_shared_set. Dec 2 03:00:09 localhost puppet-user[51840]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. Use the same parameter in nova::glance Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: 
/Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as 
'{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}8dd3769945b86c38433504b97f7851a931eb3c94b667298d10a9796a3d020595' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: 
/Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e' Dec 2 03:00:09 localhost puppet-user[51840]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used. Dec 2 03:00:09 localhost systemd[1]: libpod-e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243.scope: Deactivated successfully. Dec 2 03:00:09 localhost systemd[1]: libpod-e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243.scope: Consumed 2.740s CPU time. Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885' Dec 2 03:00:09 localhost podman[51625]: 2025-12-02 08:00:09.295922885 +0000 UTC m=+4.366917194 container died e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, config_id=tripleo_puppet_step1, container_name=container-puppet-iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc.) 
Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62' Dec 2 03:00:09 localhost puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed Dec 2 03:00:09 localhost puppet-user[51811]: Notice: Applied catalog in 0.29 seconds Dec 2 03:00:09 localhost puppet-user[51811]: Application: Dec 2 03:00:09 localhost puppet-user[51811]: Initial environment: production Dec 2 03:00:09 localhost puppet-user[51811]: Converged environment: production Dec 2 03:00:09 localhost puppet-user[51811]: Run mode: user Dec 2 03:00:09 localhost puppet-user[51811]: Changes: Dec 2 03:00:09 localhost puppet-user[51811]: Total: 43 Dec 2 03:00:09 localhost puppet-user[51811]: Events: Dec 2 03:00:09 localhost puppet-user[51811]: Success: 43 Dec 2 03:00:09 localhost puppet-user[51811]: Total: 43 Dec 2 03:00:09 localhost puppet-user[51811]: Resources: Dec 2 03:00:09 localhost puppet-user[51811]: Skipped: 14 Dec 2 03:00:09 localhost puppet-user[51811]: Changed: 38 Dec 2 03:00:09 localhost puppet-user[51811]: Out of sync: 38 Dec 2 03:00:09 localhost puppet-user[51811]: Total: 82 Dec 2 03:00:09 localhost puppet-user[51811]: Time: Dec 2 03:00:09 localhost puppet-user[51811]: Concat fragment: 0.00 Dec 2 03:00:09 localhost puppet-user[51811]: Concat file: 0.00 Dec 2 03:00:09 localhost puppet-user[51811]: File: 0.17 Dec 2 03:00:09 localhost puppet-user[51811]: Transaction evaluation: 0.27 Dec 2 03:00:09 localhost puppet-user[51811]: Catalog application: 0.29 Dec 2 03:00:09 localhost puppet-user[51811]: Config retrieval: 0.49 Dec 2 03:00:09 
localhost puppet-user[51811]: Last run: 1764662409 Dec 2 03:00:09 localhost puppet-user[51811]: Total: 0.29 Dec 2 03:00:09 localhost puppet-user[51811]: Version: Dec 2 03:00:09 localhost puppet-user[51811]: Config: 1764662408 Dec 2 03:00:09 localhost puppet-user[51811]: Puppet: 7.10.0 Dec 2 03:00:09 localhost podman[52475]: 2025-12-02 08:00:09.400697599 +0000 UTC m=+0.094132137 container cleanup e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, container_name=container-puppet-iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64) Dec 2 03:00:09 localhost systemd[1]: libpod-conmon-e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243.scope: Deactivated successfully. 
Dec 2 03:00:09 localhost python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::iscsid#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} 
--log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Dec 2 03:00:09 localhost podman[52504]: 2025-12-02 08:00:09.433691783 +0000 UTC m=+0.077398179 container create 7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, config_data={'security_opt': 
['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., container_name=container-puppet-ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, 
com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:00:09 localhost podman[52482]: 2025-12-02 08:00:09.458568195 +0000 UTC m=+0.134931824 container create a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=container-puppet-rsyslog, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_puppet_step1, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.4, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:00:09 localhost podman[52504]: 2025-12-02 08:00:09.385331591 +0000 UTC m=+0.029037997 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 2 03:00:09 localhost systemd[1]: Started libpod-conmon-7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e.scope. Dec 2 03:00:09 localhost systemd[1]: Started libpod-conmon-a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a.scope. Dec 2 03:00:09 localhost systemd[1]: Started libcrun container. 
Dec 2 03:00:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40ebd622fb49c1d984ae69be39f1f1d5d9bbd0185c9e75888b797dd6f2afb7e/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40ebd622fb49c1d984ae69be39f1f1d5d9bbd0185c9e75888b797dd6f2afb7e/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:09 localhost systemd[1]: Started libcrun container. Dec 2 03:00:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac3d5ef6cd74f750bad6e1bed4e64701dec5212d5cf52ac16ce138246b77afa/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:09 localhost podman[52504]: 2025-12-02 08:00:09.511873794 +0000 UTC m=+0.155580180 container init 7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.12, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, container_name=container-puppet-ovn_controller, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., 
name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git) Dec 2 03:00:09 localhost podman[52482]: 2025-12-02 08:00:09.425133278 +0000 UTC m=+0.101496897 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Dec 2 
03:00:09 localhost podman[52504]: 2025-12-02 08:00:09.52513894 +0000 UTC m=+0.168845326 container start 7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, tcib_managed=true, release=1761123044, url=https://www.redhat.com, container_name=container-puppet-ovn_controller, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc.) 
Dec 2 03:00:09 localhost podman[52504]: 2025-12-02 08:00:09.525872792 +0000 UTC m=+0.169579198 container attach 7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=container-puppet-ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 
'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, url=https://www.redhat.com, release=1761123044) Dec 2 03:00:09 localhost podman[52482]: 2025-12-02 08:00:09.605852936 +0000 UTC m=+0.282216575 container init a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, build-date=2025-11-18T22:49:49Z, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 
'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, container_name=container-puppet-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog) Dec 2 03:00:09 localhost podman[52482]: 2025-12-02 08:00:09.616177824 +0000 UTC m=+0.292541463 container start 
a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, 
konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_puppet_step1, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-rsyslog, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, io.openshift.expose-services=, build-date=2025-11-18T22:49:49Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Dec 2 03:00:09 localhost podman[52482]: 2025-12-02 08:00:09.616433302 +0000 UTC m=+0.292796941 container attach a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, vcs-type=git, com.redhat.component=openstack-rsyslog-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': 
['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, tcib_managed=true, container_name=container-puppet-rsyslog, managed_by=tripleo_ansible) Dec 2 03:00:09 localhost systemd[1]: var-lib-containers-storage-overlay-4cbd426914bbc0b3c94f281248297da1bdd998807cad604e4ab2f39851a1899c-merged.mount: Deactivated successfully. Dec 2 03:00:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7-userdata-shm.mount: Deactivated successfully. 
Dec 2 03:00:09 localhost systemd[1]: var-lib-containers-storage-overlay-400c7ba0962a9736ae4730e3c3204c67b2bad8d9266c2a49e5c729fb35c892ee-merged.mount: Deactivated successfully. Dec 2 03:00:09 localhost systemd[1]: var-lib-containers-storage-overlay-1cd4674896f37ed03c180aa0ab9f93ced388cfe5185ce6c19dc1fe143ce7985a-merged.mount: Deactivated successfully. Dec 2 03:00:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27-userdata-shm.mount: Deactivated successfully. Dec 2 03:00:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243-userdata-shm.mount: Deactivated successfully. Dec 2 03:00:09 localhost systemd[1]: libpod-85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7.scope: Deactivated successfully. Dec 2 03:00:09 localhost systemd[1]: libpod-85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7.scope: Consumed 2.865s CPU time. 
Dec 2 03:00:09 localhost podman[51663]: 2025-12-02 08:00:09.71127296 +0000 UTC m=+4.730441644 container died 85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=container-puppet-collectd, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, vcs-type=git) Dec 2 03:00:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7-userdata-shm.mount: Deactivated successfully. Dec 2 03:00:09 localhost systemd[1]: var-lib-containers-storage-overlay-1f51912cd7ca4d93a076413ed4727a62a427f09f722d7bf72e350182571c8db0-merged.mount: Deactivated successfully. 
Dec 2 03:00:09 localhost podman[52635]: 2025-12-02 08:00:09.798287224 +0000 UTC m=+0.078526712 container cleanup 85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-collectd, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-collectd, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible) Dec 2 03:00:09 localhost systemd[1]: libpod-conmon-85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7.scope: Deactivated successfully. Dec 2 03:00:09 localhost python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume 
/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Dec 2 03:00:09 localhost puppet-user[51840]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 1.32 seconds Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}5ba64817af7f9555281205611eb52d45214b5127a0e5ce894ff9b319c0723a16' Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe' Dec 2 03:00:10 localhost puppet-user[51840]: Warning: Empty environment setting 'TLS_PASSWORD' Dec 2 03:00:10 localhost puppet-user[51840]: (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182) Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}8c1883a65300cc327d1cb9c34702b30b2083e07e3f42b734ab7685f1cc6449ef' Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: 
/Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: 
/Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: 
/Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created Dec 2 03:00:10 localhost puppet-user[52333]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 2 03:00:10 localhost puppet-user[52333]: (file: /etc/puppet/hiera.yaml) Dec 2 03:00:10 localhost puppet-user[52333]: Warning: Undefined variable '::deploy_config_name'; Dec 2 03:00:10 localhost puppet-user[52333]: (file & line not available) Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created Dec 2 03:00:10 localhost puppet-user[52333]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 2 03:00:10 localhost puppet-user[52333]: (file & line not available) Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created 
Dec 2 03:00:10 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::memcache_servers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. 
(file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39) Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. 
(file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36) Dec 2 03:00:11 localhost puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26) Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.40 seconds Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created Dec 2 
03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: 
/Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created Dec 2 03:00:11 localhost puppet-user[52561]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Dec 2 03:00:11 localhost puppet-user[52561]: (file: /etc/puppet/hiera.yaml) Dec 2 03:00:11 localhost puppet-user[52561]: Warning: Undefined variable '::deploy_config_name'; Dec 2 03:00:11 localhost puppet-user[52561]: (file & line not available) Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created Dec 2 03:00:11 localhost puppet-user[52561]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 2 03:00:11 localhost puppet-user[52561]: (file & line not available) Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created Dec 2 03:00:11 localhost puppet-user[52636]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Dec 2 03:00:11 localhost puppet-user[52636]: (file: /etc/puppet/hiera.yaml) Dec 2 03:00:11 localhost puppet-user[52636]: Warning: Undefined variable '::deploy_config_name'; Dec 2 03:00:11 localhost puppet-user[52636]: (file & line not available) Dec 2 03:00:11 localhost puppet-user[52636]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 2 03:00:11 localhost puppet-user[52636]: (file & line not available) Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created Dec 2 03:00:11 localhost puppet-user[52561]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.25 seconds Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: 
/Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created Dec 2 03:00:11 localhost puppet-user[52636]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.25 seconds Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created Dec 2 03:00:11 localhost ovs-vsctl[52889]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642 Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created Dec 2 03:00:11 localhost puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created Dec 2 03:00:11 localhost puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created Dec 2 03:00:11 localhost ovs-vsctl[52891]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-encap-type=geneve
Dec 2 03:00:11 localhost puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created
Dec 2 03:00:11 localhost ovs-vsctl[52893]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.108
Dec 2 03:00:11 localhost puppet-user[52333]: Notice: Applied catalog in 0.50 seconds
Dec 2 03:00:11 localhost puppet-user[52333]: Application:
Dec 2 03:00:11 localhost puppet-user[52333]: Initial environment: production
Dec 2 03:00:11 localhost puppet-user[52333]: Converged environment: production
Dec 2 03:00:11 localhost puppet-user[52333]: Run mode: user
Dec 2 03:00:11 localhost puppet-user[52333]: Changes:
Dec 2 03:00:11 localhost puppet-user[52333]: Total: 31
Dec 2 03:00:11 localhost puppet-user[52333]: Events:
Dec 2 03:00:11 localhost puppet-user[52333]: Success: 31
Dec 2 03:00:11 localhost puppet-user[52333]: Total: 31
Dec 2 03:00:11 localhost puppet-user[52333]: Resources:
Dec 2 03:00:11 localhost puppet-user[52333]: Skipped: 22
Dec 2 03:00:11 localhost puppet-user[52333]: Changed: 31
Dec 2 03:00:11 localhost puppet-user[52333]: Out of sync: 31
Dec 2 03:00:11 localhost puppet-user[52333]: Total: 151
Dec 2 03:00:11 localhost puppet-user[52333]: Time:
Dec 2 03:00:11 localhost puppet-user[52333]: Package: 0.02
Dec 2 03:00:11 localhost puppet-user[52333]: Ceilometer config: 0.39
Dec 2 03:00:11 localhost puppet-user[52333]: Config retrieval: 0.48
Dec 2 03:00:11 localhost puppet-user[52333]: Transaction evaluation: 0.49
Dec 2 03:00:11 localhost puppet-user[52333]: Catalog application: 0.50
Dec 2 03:00:11 localhost puppet-user[52333]: Last run: 1764662411
Dec 2 03:00:11 localhost puppet-user[52333]: Resources: 0.00
Dec 2 03:00:11 localhost puppet-user[52333]: Total: 0.50
Dec 2 03:00:11 localhost puppet-user[52333]: Version:
Dec 2 03:00:11 localhost puppet-user[52333]: Config: 1764662410
Dec 2 03:00:11 localhost puppet-user[52333]: Puppet: 7.10.0
Dec 2 03:00:11 localhost puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created
Dec 2 03:00:11 localhost ovs-vsctl[52896]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005541914.localdomain
Dec 2 03:00:11 localhost puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005541914.novalocal' to 'np0005541914.localdomain'
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created
Dec 2 03:00:11 localhost ovs-vsctl[52898]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52636]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2'
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52636]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b'
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created
Dec 2 03:00:11 localhost ovs-vsctl[52900]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000
Dec 2 03:00:11 localhost puppet-user[52636]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}301188e6f63ae57a8369f32fa91f83fd08a8e66eaf1f4f68c45e1e21c89fcefc'
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}3f62d179f65be7c16842a28abf994d6a58e30b2328fb95c74da2c0a9b9529a22'
Dec 2 03:00:11 localhost puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52636]: Notice: Applied catalog in 0.12 seconds
Dec 2 03:00:11 localhost puppet-user[52636]: Application:
Dec 2 03:00:11 localhost puppet-user[52636]: Initial environment: production
Dec 2 03:00:11 localhost puppet-user[52636]: Converged environment: production
Dec 2 03:00:11 localhost puppet-user[52636]: Run mode: user
Dec 2 03:00:11 localhost puppet-user[52636]: Changes:
Dec 2 03:00:11 localhost puppet-user[52636]: Total: 3
Dec 2 03:00:11 localhost puppet-user[52636]: Events:
Dec 2 03:00:11 localhost puppet-user[52636]: Success: 3
Dec 2 03:00:11 localhost puppet-user[52636]: Total: 3
Dec 2 03:00:11 localhost puppet-user[52636]: Resources:
Dec 2 03:00:11 localhost puppet-user[52636]: Skipped: 11
Dec 2 03:00:11 localhost puppet-user[52636]: Changed: 3
Dec 2 03:00:11 localhost puppet-user[52636]: Out of sync: 3
Dec 2 03:00:11 localhost puppet-user[52636]: Total: 25
Dec 2 03:00:11 localhost puppet-user[52636]: Time:
Dec 2 03:00:11 localhost puppet-user[52636]: Concat file: 0.00
Dec 2 03:00:11 localhost puppet-user[52636]: Concat fragment: 0.00
Dec 2 03:00:11 localhost puppet-user[52636]: File: 0.02
Dec 2 03:00:11 localhost puppet-user[52636]: Transaction evaluation: 0.12
Dec 2 03:00:11 localhost puppet-user[52636]: Catalog application: 0.12
Dec 2 03:00:11 localhost puppet-user[52636]: Config retrieval: 0.30
Dec 2 03:00:11 localhost puppet-user[52636]: Last run: 1764662411
Dec 2 03:00:11 localhost puppet-user[52636]: Total: 0.12
Dec 2 03:00:11 localhost puppet-user[52636]: Version:
Dec 2 03:00:11 localhost puppet-user[52636]: Config: 1764662411
Dec 2 03:00:11 localhost puppet-user[52636]: Puppet: 7.10.0
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created
Dec 2 03:00:11 localhost ovs-vsctl[52902]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created
Dec 2 03:00:11 localhost ovs-vsctl[52904]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true
Dec 2 03:00:11 localhost puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created
Dec 2 03:00:11 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[52916]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000
Dec 2 03:00:12 localhost puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[52920]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0
Dec 2 03:00:12 localhost puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[52922]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:6e:12:41
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[52928]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[52932]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[52934]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:garp-max-timeout-sec=0
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52561]: Notice: Applied catalog in 0.43 seconds
Dec 2 03:00:12 localhost puppet-user[52561]: Application:
Dec 2 03:00:12 localhost puppet-user[52561]: Initial environment: production
Dec 2 03:00:12 localhost puppet-user[52561]: Converged environment: production
Dec 2 03:00:12 localhost puppet-user[52561]: Run mode: user
Dec 2 03:00:12 localhost puppet-user[52561]: Changes:
Dec 2 03:00:12 localhost puppet-user[52561]: Total: 14
Dec 2 03:00:12 localhost puppet-user[52561]: Events:
Dec 2 03:00:12 localhost puppet-user[52561]: Success: 14
Dec 2 03:00:12 localhost puppet-user[52561]: Total: 14
Dec 2 03:00:12 localhost puppet-user[52561]: Resources:
Dec 2 03:00:12 localhost puppet-user[52561]: Skipped: 12
Dec 2 03:00:12 localhost puppet-user[52561]: Changed: 14
Dec 2 03:00:12 localhost puppet-user[52561]: Out of sync: 14
Dec 2 03:00:12 localhost puppet-user[52561]: Total: 29
Dec 2 03:00:12 localhost puppet-user[52561]: Time:
Dec 2 03:00:12 localhost puppet-user[52561]: Exec: 0.02
Dec 2 03:00:12 localhost puppet-user[52561]: Config retrieval: 0.29
Dec 2 03:00:12 localhost puppet-user[52561]: Vs config: 0.37
Dec 2 03:00:12 localhost puppet-user[52561]: Transaction evaluation: 0.42
Dec 2 03:00:12 localhost puppet-user[52561]: Catalog application: 0.43
Dec 2 03:00:12 localhost puppet-user[52561]: Last run: 1764662412
Dec 2 03:00:12 localhost puppet-user[52561]: Total: 0.43
Dec 2 03:00:12 localhost puppet-user[52561]: Version:
Dec 2 03:00:12 localhost puppet-user[52561]: Config: 1764662411
Dec 2 03:00:12 localhost puppet-user[52561]: Puppet: 7.10.0
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created
Dec 2 03:00:12 localhost systemd[1]: libpod-acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7.scope: Deactivated successfully.
Dec 2 03:00:12 localhost systemd[1]: libpod-acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7.scope: Consumed 3.163s CPU time.
Dec 2 03:00:12 localhost systemd[1]: libpod-a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a.scope: Deactivated successfully.
Dec 2 03:00:12 localhost systemd[1]: libpod-a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a.scope: Consumed 2.390s CPU time.
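The ovs-vsctl entries above each record one OVN setting being written into the external_ids column of the local Open_vSwitch record. As a sketch of how those key/value pairs can be recovered from the log (the helper function is hypothetical; the sample lines are quoted verbatim from the entries above):

```python
import re

# Matches the command ovs-vsctl logs for itself, e.g.
# "Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int"
SET_RE = re.compile(r"ovs-vsctl set Open_vSwitch \. external_ids:([\w.-]+)=(\S+)")

def collect_external_ids(lines):
    """Fold the logged 'set' invocations into one {key: value} mapping."""
    ids = {}
    for line in lines:
        m = SET_RE.search(line)
        if m:
            ids[m.group(1)] = m.group(2)
    return ids

# Sample entries taken verbatim from the log above.
sample = [
    "Dec 2 03:00:11 localhost ovs-vsctl[52893]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.108",
    "Dec 2 03:00:11 localhost ovs-vsctl[52898]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int",
    "Dec 2 03:00:12 localhost ovs-vsctl[52928]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex",
]
print(collect_external_ids(sample))
```

On the host itself, the same mapping could presumably be read back with `ovs-vsctl get Open_vSwitch . external_ids`; the snippet above only demonstrates recovering it from the log text.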
Dec 2 03:00:12 localhost podman[52224]: 2025-12-02 08:00:12.238035763 +0000 UTC m=+3.678448275 container died acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-central, config_id=tripleo_puppet_step1, build-date=2025-11-19T00:11:59Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-central-container, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., container_name=container-puppet-ceilometer, tcib_managed=true)
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created
Dec 2 03:00:12 localhost podman[52482]: 2025-12-02 08:00:12.290996512 +0000 UTC m=+2.967360161 container died a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:49Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-rsyslog, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, distribution-scope=public, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created
Dec 2 03:00:12 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created
Dec 2 03:00:12 localhost systemd[1]: libpod-7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e.scope: Deactivated successfully.
Dec 2 03:00:12 localhost systemd[1]: libpod-7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e.scope: Consumed 2.876s CPU time.
Dec 2 03:00:12 localhost podman[52504]: 2025-12-02 08:00:12.608095517 +0000 UTC m=+3.251801943 container died 7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=container-puppet-ovn_controller, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, architecture=x86_64, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 2 03:00:12 localhost systemd[1]: tmp-crun.4jjSGK.mount: Deactivated successfully.
Dec 2 03:00:12 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7-userdata-shm.mount: Deactivated successfully.
Dec 2 03:00:12 localhost systemd[1]: var-lib-containers-storage-overlay-3d2cbcd6205ebc71bef7b0378e46c50958788e3d833a076a9d36ebe402a8a467-merged.mount: Deactivated successfully.
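A captured stream like this one is often re-wrapped by the collector, leaving several syslog entries fused onto one physical line and single entries split across line breaks. As a sketch of how such a chunk can be split back into one entry per item (the helper name and the fused sample are illustrative; the timestamp format matches the entries above):

```python
import re

# Every entry in this capture starts with a timestamp of the form
# "Dec 2 03:00:13 localhost ...". A zero-width lookahead lets re.split
# cut in front of each timestamp without consuming it.
ENTRY_RE = re.compile(r"(?=Dec\s+\d+\s+\d{2}:\d{2}:\d{2}\s+localhost\s)")

def split_entries(blob):
    """Split a re-wrapped chunk of this log back into individual entries."""
    return [part.strip() for part in ENTRY_RE.split(blob) if part.strip()]

# Two entries fused onto one line, modeled on the log above.
fused = ("Dec 2 03:00:13 localhost systemd[1]: libpod-conmon.scope: Deactivated successfully. "
         "Dec 2 03:00:13 localhost podman[52607]: image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1")
print(split_entries(fused))
```

Note that `re.split` on a zero-width pattern requires Python 3.7 or later; on older interpreters the lookahead split raises an error.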
Dec 2 03:00:13 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully
Dec 2 03:00:13 localhost podman[53001]: 2025-12-02 08:00:13.353692029 +0000 UTC m=+1.107345539 container cleanup acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central, name=rhosp17/openstack-ceilometer-central, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-central-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=container-puppet-ceilometer, build-date=2025-11-19T00:11:59Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 2 03:00:13 localhost python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::ceilometer::agent::polling#012include tripleo::profile::base::ceilometer::agent::polling#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Dec 2 03:00:13 localhost systemd[1]: libpod-conmon-acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7.scope: Deactivated successfully.
Dec 2 03:00:13 localhost podman[52607]: 2025-12-02 08:00:09.683277755 +0000 UTC m=+0.025958695 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Dec 2 03:00:13 localhost podman[53002]: 2025-12-02 08:00:13.377963133 +0000 UTC m=+1.129790118 container cleanup a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, container_name=container-puppet-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_id=tripleo_puppet_step1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=) Dec 2 03:00:13 localhost podman[53057]: 2025-12-02 08:00:13.396060473 +0000 UTC m=+0.777008800 container cleanup 7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, build-date=2025-11-18T23:34:05Z, 
version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, container_name=container-puppet-ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:00:13 localhost systemd[1]: libpod-conmon-a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a.scope: Deactivated successfully. Dec 2 03:00:13 localhost systemd[1]: libpod-conmon-7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e.scope: Deactivated successfully. 
Dec 2 03:00:13 localhost python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Dec 2 03:00:13 localhost python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::agents::ovn#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 
'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume 
/run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 2 03:00:13 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully Dec 2 03:00:13 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created Dec 2 03:00:13 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created Dec 2 03:00:13 localhost podman[53211]: 2025-12-02 08:00:13.636176083 +0000 UTC m=+0.068296087 container create 9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-server-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, architecture=x86_64, build-date=2025-11-19T00:23:27Z, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, container_name=container-puppet-neutron, config_id=tripleo_puppet_step1, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server) Dec 2 03:00:13 localhost 
systemd[1]: Started libpod-conmon-9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32.scope. Dec 2 03:00:13 localhost systemd[1]: Started libcrun container. Dec 2 03:00:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73f9890a30d4cca7075aebf2d1c79838b39a1c605ffe5291a19916efb9ec9b29/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:13 localhost podman[53211]: 2025-12-02 08:00:13.693508263 +0000 UTC m=+0.125628267 container init 9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, release=1761123044, name=rhosp17/openstack-neutron-server, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:23:27Z, tcib_managed=true, config_id=tripleo_puppet_step1, com.redhat.component=openstack-neutron-server-container, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-neutron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-server, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1) Dec 2 03:00:13 localhost podman[53211]: 2025-12-02 08:00:13.600722715 +0000 UTC m=+0.032842719 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Dec 2 03:00:13 localhost podman[53211]: 2025-12-02 08:00:13.702836121 +0000 UTC m=+0.134956155 container start 9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:23:27Z, vendor=Red Hat, Inc., architecture=x86_64, container_name=container-puppet-neutron, config_id=tripleo_puppet_step1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-server-container, description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, url=https://www.redhat.com) Dec 2 03:00:13 localhost podman[53211]: 2025-12-02 08:00:13.703077238 +0000 UTC m=+0.135197262 container attach 9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, distribution-scope=public, config_id=tripleo_puppet_step1, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, container_name=container-puppet-neutron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, url=https://www.redhat.com, build-date=2025-11-19T00:23:27Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-server, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Dec 2 03:00:13 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created Dec 2 03:00:13 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created Dec 2 
03:00:13 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created
Dec 2 03:00:13 localhost systemd[1]: tmp-crun.kOmGFi.mount: Deactivated successfully.
Dec 2 03:00:13 localhost systemd[1]: var-lib-containers-storage-overlay-6ac3d5ef6cd74f750bad6e1bed4e64701dec5212d5cf52ac16ce138246b77afa-merged.mount: Deactivated successfully.
Dec 2 03:00:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a-userdata-shm.mount: Deactivated successfully.
Dec 2 03:00:13 localhost systemd[1]: var-lib-containers-storage-overlay-d40ebd622fb49c1d984ae69be39f1f1d5d9bbd0185c9e75888b797dd6f2afb7e-merged.mount: Deactivated successfully.
Dec 2 03:00:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e-userdata-shm.mount: Deactivated successfully.
Dec 2 03:00:13 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}66a7ab6cc1a19ea5002a5aaa2cfb2f196778c89c859d0afac926fe3fac9c75a4'
Dec 2 03:00:14 localhost puppet-user[51840]: Notice: Applied catalog in 4.65 seconds
Dec 2 03:00:14 localhost puppet-user[51840]: Application:
Dec 2 03:00:14 localhost puppet-user[51840]: Initial environment: production
Dec 2 03:00:14 localhost puppet-user[51840]: Converged environment: production
Dec 2 03:00:14 localhost puppet-user[51840]: Run mode: user
Dec 2 03:00:14 localhost puppet-user[51840]: Changes:
Dec 2 03:00:14 localhost puppet-user[51840]: Total: 183
Dec 2 03:00:14 localhost puppet-user[51840]: Events:
Dec 2 03:00:14 localhost puppet-user[51840]: Success: 183
Dec 2 03:00:14 localhost puppet-user[51840]: Total: 183
Dec 2 03:00:14 localhost puppet-user[51840]: Resources:
Dec 2 03:00:14 localhost puppet-user[51840]: Changed: 183
Dec 2 03:00:14 localhost puppet-user[51840]: Out of sync: 183
Dec 2 03:00:14 localhost puppet-user[51840]: Skipped: 57
Dec 2 03:00:14 localhost puppet-user[51840]: Total: 487
Dec 2 03:00:14 localhost puppet-user[51840]: Time:
Dec 2 03:00:14 localhost puppet-user[51840]: Concat file: 0.00
Dec 2 03:00:14 localhost puppet-user[51840]: Concat fragment: 0.00
Dec 2 03:00:14 localhost puppet-user[51840]: Anchor: 0.00
Dec 2 03:00:14 localhost puppet-user[51840]: File line: 0.00
Dec 2 03:00:14 localhost puppet-user[51840]: Virtlogd config: 0.00
Dec 2 03:00:14 localhost puppet-user[51840]: Virtstoraged config: 0.01
Dec 2 03:00:14 localhost puppet-user[51840]: Virtqemud config: 0.01
Dec 2 03:00:14 localhost puppet-user[51840]: Package: 0.02
Dec 2 03:00:14 localhost puppet-user[51840]: Virtsecretd config: 0.02
Dec 2 03:00:14 localhost puppet-user[51840]: Virtproxyd config: 0.03
Dec 2 03:00:14 localhost puppet-user[51840]: File: 0.03
Dec 2 03:00:14 localhost puppet-user[51840]: Exec: 0.04
Dec 2 03:00:14 localhost puppet-user[51840]: Virtnodedevd config: 0.05
Dec 2 03:00:14 localhost puppet-user[51840]: Augeas: 1.14
Dec 2 03:00:14 localhost puppet-user[51840]: Config retrieval: 1.56
Dec 2 03:00:14 localhost puppet-user[51840]: Last run: 1764662414
Dec 2 03:00:14 localhost puppet-user[51840]: Nova config: 3.10
Dec 2 03:00:14 localhost puppet-user[51840]: Transaction evaluation: 4.64
Dec 2 03:00:14 localhost puppet-user[51840]: Catalog application: 4.65
Dec 2 03:00:14 localhost puppet-user[51840]: Resources: 0.00
Dec 2 03:00:14 localhost puppet-user[51840]: Total: 4.65
Dec 2 03:00:14 localhost puppet-user[51840]: Version:
Dec 2 03:00:14 localhost puppet-user[51840]: Config: 1764662408
Dec 2 03:00:14 localhost puppet-user[51840]: Puppet: 7.10.0
Dec 2 03:00:15 localhost puppet-user[53241]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass
Dec 2 03:00:15 localhost puppet-user[53241]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 2 03:00:15 localhost puppet-user[53241]: (file: /etc/puppet/hiera.yaml)
Dec 2 03:00:15 localhost puppet-user[53241]: Warning: Undefined variable '::deploy_config_name';
Dec 2 03:00:15 localhost puppet-user[53241]: (file & line not available)
Dec 2 03:00:15 localhost puppet-user[53241]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 2 03:00:15 localhost puppet-user[53241]: (file & line not available)
Dec 2 03:00:15 localhost systemd[1]: libpod-0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b.scope: Deactivated successfully.
Dec 2 03:00:15 localhost systemd[1]: libpod-0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b.scope: Consumed 8.665s CPU time.
Dec 2 03:00:15 localhost puppet-user[53241]: Warning: Unknown variable: 'dhcp_agents_per_net'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37)
Dec 2 03:00:15 localhost podman[53353]: 2025-12-02 08:00:15.641337593 +0000 UTC m=+0.034603014 container died 0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, container_name=container-puppet-nova_libvirt, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 2 03:00:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b-userdata-shm.mount: Deactivated successfully.
Dec 2 03:00:15 localhost systemd[1]: var-lib-containers-storage-overlay-b31a729f52d6f9ece82ff86db83ec0c0420ae47f49a38ed5b1f2bb83a229399e-merged.mount: Deactivated successfully.
Dec 2 03:00:15 localhost podman[53353]: 2025-12-02 08:00:15.761696622 +0000 UTC m=+0.154962063 container cleanup 0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-nova_libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-nova-libvirt-container)
Dec 2 03:00:15 localhost systemd[1]: libpod-conmon-0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b.scope: Deactivated successfully.
Dec 2 03:00:15 localhost python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages#012# TODO(emilien): figure how to deal with libvirt profile.#012# We'll probably treat it like we do with Neutron plugins.#012# Until then, just include it in the default nova-compute role.#012include tripleo::profile::base::nova::compute::libvirt#012#012include tripleo::profile::base::nova::libvirt#012#012include tripleo::profile::base::nova::compute::libvirt_guests#012#012include tripleo::profile::base::sshd#012include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.66 seconds
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Dec 2 03:00:16 localhost sshd[53388]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created
Dec 2 03:00:16 localhost puppet-user[53241]: Notice: Applied catalog in 0.47 seconds
Dec 2 03:00:16 localhost puppet-user[53241]: Application:
Dec 2 03:00:16 localhost puppet-user[53241]: Initial environment: production
Dec 2 03:00:16 localhost puppet-user[53241]: Converged environment: production
Dec 2 03:00:16 localhost puppet-user[53241]: Run mode: user
Dec 2 03:00:16 localhost puppet-user[53241]: Changes:
Dec 2 03:00:16 localhost puppet-user[53241]: Total: 33
Dec 2 03:00:16 localhost puppet-user[53241]: Events:
Dec 2 03:00:16 localhost puppet-user[53241]: Success: 33
Dec 2 03:00:16 localhost puppet-user[53241]: Total: 33
Dec 2 03:00:16 localhost puppet-user[53241]: Resources:
Dec 2 03:00:16 localhost puppet-user[53241]: Skipped: 21
Dec 2 03:00:16 localhost puppet-user[53241]: Changed: 33
Dec 2 03:00:16 localhost puppet-user[53241]: Out of sync: 33
Dec 2 03:00:16 localhost puppet-user[53241]: Total: 155
Dec 2 03:00:16 localhost puppet-user[53241]: Time:
Dec 2 03:00:16 localhost puppet-user[53241]: Resources: 0.00
Dec 2 03:00:16 localhost puppet-user[53241]: Ovn metadata agent config: 0.02
Dec 2 03:00:16 localhost puppet-user[53241]: Neutron config: 0.39
Dec 2 03:00:16 localhost puppet-user[53241]: Transaction evaluation: 0.46
Dec 2 03:00:16 localhost puppet-user[53241]: Catalog application: 0.47
Dec 2 03:00:16 localhost puppet-user[53241]: Config retrieval: 0.73
Dec 2 03:00:16 localhost puppet-user[53241]: Last run: 1764662416
Dec 2 03:00:16 localhost puppet-user[53241]: Total: 0.47
Dec 2 03:00:16 localhost puppet-user[53241]: Version:
Dec 2 03:00:16 localhost puppet-user[53241]: Config: 1764662415
Dec 2 03:00:16 localhost puppet-user[53241]: Puppet: 7.10.0
Dec 2 03:00:17 localhost systemd[1]: libpod-9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32.scope: Deactivated successfully.
Dec 2 03:00:17 localhost systemd[1]: libpod-9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32.scope: Consumed 3.534s CPU time.
Dec 2 03:00:17 localhost podman[53211]: 2025-12-02 08:00:17.246419943 +0000 UTC m=+3.678539967 container died 9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, summary=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:23:27Z, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-server-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-server, container_name=container-puppet-neutron, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044)
Dec 2 03:00:17 localhost systemd[1]: tmp-crun.3F0vT0.mount: Deactivated successfully.
Dec 2 03:00:17 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32-userdata-shm.mount: Deactivated successfully.
Dec 2 03:00:17 localhost systemd[1]: var-lib-containers-storage-overlay-73f9890a30d4cca7075aebf2d1c79838b39a1c605ffe5291a19916efb9ec9b29-merged.mount: Deactivated successfully.
Dec 2 03:00:17 localhost podman[53424]: 2025-12-02 08:00:17.396634472 +0000 UTC m=+0.139869492 container cleanup 9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:23:27Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-neutron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-neutron-server, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, com.redhat.component=openstack-neutron-server-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Dec 2 03:00:17 localhost systemd[1]: libpod-conmon-9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32.scope: Deactivated successfully.
Dec 2 03:00:17 localhost python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::ovn_metadata#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Dec 2 03:00:18 localhost python3[53477]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:00:19 localhost python3[53509]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 03:00:19 localhost python3[53559]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:00:20 localhost python3[53602]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662419.4198186-83439-90547271906757/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:00:20 localhost python3[53664]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:00:20 localhost python3[53707]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662420.2006798-83439-201456353606193/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:00:21 localhost python3[53769]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:00:21 localhost python3[53812]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662421.1028492-83526-103879694803810/source dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:00:22 localhost python3[53874]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:00:22 localhost python3[53917]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662421.9341533-83564-244804915133567/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:00:23 localhost python3[53947]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 03:00:23 localhost systemd[1]: Reloading.
Dec 2 03:00:23 localhost systemd-rc-local-generator[53969]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:00:23 localhost systemd-sysv-generator[53973]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:00:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:00:23 localhost systemd[1]: Reloading. Dec 2 03:00:23 localhost systemd-sysv-generator[54012]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:00:23 localhost systemd-rc-local-generator[54009]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:00:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:00:23 localhost systemd[1]: Starting TripleO Container Shutdown... Dec 2 03:00:23 localhost systemd[1]: Finished TripleO Container Shutdown. Dec 2 03:00:24 localhost python3[54070]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:00:24 localhost python3[54113]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662423.8682857-83617-259391825803792/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:25 localhost python3[54175]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True 
checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:00:25 localhost python3[54218]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662424.7594256-83644-131110726709512/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:25 localhost python3[54248]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:00:25 localhost systemd[1]: Reloading. Dec 2 03:00:26 localhost systemd-rc-local-generator[54271]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:00:26 localhost systemd-sysv-generator[54274]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:00:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:00:26 localhost systemd[1]: Reloading. Dec 2 03:00:26 localhost systemd-rc-local-generator[54310]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:00:26 localhost systemd-sysv-generator[54315]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 03:00:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:00:26 localhost systemd[1]: Starting Create netns directory... Dec 2 03:00:26 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 03:00:26 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 03:00:26 localhost systemd[1]: Finished Create netns directory. Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: b56066700c0c3079c35d037ee6698236 Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: d31718fcd17fdeee6489534105191c7a Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: d89676d7ec0a7c13ef9894fdb26c6e3a Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: 51230b537c6b56095225b7a0a6b952d0 Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: 51230b537c6b56095225b7a0a6b952d0 Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: 51230b537c6b56095225b7a0a6b952d0 Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: 51230b537c6b56095225b7a0a6b952d0 Dec 2 03:00:27 localhost python3[54340]: 
ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: 51230b537c6b56095225b7a0a6b952d0 Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: 51230b537c6b56095225b7a0a6b952d0 Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: 96606bb2d91ec59ed336cbd6010f1851 Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: 885e9e62222ac12bce952717b40ccfc4 Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: 885e9e62222ac12bce952717b40ccfc4 Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: 51230b537c6b56095225b7a0a6b952d0 Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: 51230b537c6b56095225b7a0a6b952d0 Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: 6b6de39672ef4d892f2e8f81f38c430b Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: d89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0 Dec 2 03:00:27 localhost python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: 51230b537c6b56095225b7a0a6b952d0 Dec 2 03:00:28 localhost python3[54397]: ansible-tripleo_container_manage Invoked with 
config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 2 03:00:28 localhost podman[54434]: 2025-12-02 08:00:28.956520266 +0000 UTC m=+0.093469778 container create b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr_init_logs, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 
17.1_20251118.1, batch=17.1_20251118.1) Dec 2 03:00:29 localhost systemd[1]: Started libpod-conmon-b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2.scope. Dec 2 03:00:29 localhost podman[54434]: 2025-12-02 08:00:28.910223895 +0000 UTC m=+0.047173417 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 2 03:00:29 localhost systemd[1]: Started libcrun container. Dec 2 03:00:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d735ed10a550a807437a0617701eca41c00b16c522094f4bdfdfee4840a918b/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:29 localhost podman[54434]: 2025-12-02 08:00:29.042925873 +0000 UTC m=+0.179875365 container init b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO 
Team, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, container_name=metrics_qdr_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12) Dec 2 03:00:29 localhost podman[54434]: 2025-12-02 08:00:29.053847988 +0000 UTC m=+0.190797480 container start b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
vcs-type=git, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, container_name=metrics_qdr_init_logs, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:00:29 localhost podman[54434]: 2025-12-02 08:00:29.054142627 +0000 UTC m=+0.191092249 container attach b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, vcs-type=git, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr_init_logs, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, version=17.1.12, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:00:29 localhost systemd[1]: libpod-b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2.scope: Deactivated successfully. Dec 2 03:00:29 localhost podman[54434]: 2025-12-02 08:00:29.060864357 +0000 UTC m=+0.197813889 container died b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=metrics_qdr_init_logs, config_id=tripleo_step1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, 
distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:00:29 localhost podman[54453]: 2025-12-02 08:00:29.157323464 +0000 UTC m=+0.082148591 container cleanup b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, config_id=tripleo_step1, container_name=metrics_qdr_init_logs, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z) Dec 2 03:00:29 localhost 
systemd[1]: libpod-conmon-b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2.scope: Deactivated successfully. Dec 2 03:00:29 localhost python3[54397]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd Dec 2 03:00:29 localhost podman[54525]: 2025-12-02 08:00:29.637096509 +0000 UTC m=+0.083706577 container create 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.12, container_name=metrics_qdr, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:00:29 localhost systemd[1]: Started libpod-conmon-67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.scope. 
Dec 2 03:00:29 localhost podman[54525]: 2025-12-02 08:00:29.592053046 +0000 UTC m=+0.038663134 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 2 03:00:29 localhost systemd[1]: Started libcrun container. Dec 2 03:00:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d22fb86a8cbaa2935fad3e910e4610328c0a9c2837bb75cb2a0cd28ff52849/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d22fb86a8cbaa2935fad3e910e4610328c0a9c2837bb75cb2a0cd28ff52849/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:00:29 localhost podman[54525]: 2025-12-02 08:00:29.736184464 +0000 UTC m=+0.182794562 container init 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container) Dec 2 03:00:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:00:29 localhost podman[54525]: 2025-12-02 08:00:29.776249458 +0000 UTC m=+0.222859556 container start 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1) Dec 2 03:00:29 localhost python3[54397]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=b56066700c0c3079c35d037ee6698236 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 2 03:00:29 localhost podman[54547]: 2025-12-02 08:00:29.880086465 +0000 UTC m=+0.092584001 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step1, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:00:29 localhost systemd[1]: var-lib-containers-storage-overlay-5d735ed10a550a807437a0617701eca41c00b16c522094f4bdfdfee4840a918b-merged.mount: Deactivated successfully. 
Dec 2 03:00:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2-userdata-shm.mount: Deactivated successfully. Dec 2 03:00:30 localhost podman[54547]: 2025-12-02 08:00:30.123902935 +0000 UTC m=+0.336400521 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
version=17.1.12, release=1761123044, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, tcib_managed=true, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container) Dec 2 03:00:30 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:00:30 localhost python3[54619]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:30 localhost python3[54635]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:00:30 localhost sshd[54668]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:00:31 localhost python3[54698]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662430.779021-83879-85923993452796/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False 
_original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:31 localhost python3[54714]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 03:00:31 localhost systemd[1]: Reloading. Dec 2 03:00:31 localhost systemd-rc-local-generator[54738]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:00:31 localhost systemd-sysv-generator[54743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:00:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:00:32 localhost python3[54766]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:00:32 localhost systemd[1]: Reloading. Dec 2 03:00:32 localhost systemd-sysv-generator[54795]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:00:32 localhost systemd-rc-local-generator[54791]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:00:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:00:32 localhost systemd[1]: Starting metrics_qdr container... 
Dec 2 03:00:33 localhost systemd[1]: Started metrics_qdr container. Dec 2 03:00:33 localhost python3[54846]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:34 localhost python3[54967]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005541914 step=1 update_config_hash_only=False Dec 2 03:00:35 localhost python3[54983]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:35 localhost python3[54999]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 2 03:00:56 localhost sshd[55000]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:01:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:01:01 localhost podman[55002]: 2025-12-02 08:01:01.057546085 +0000 UTC m=+0.066874935 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git) Dec 2 03:01:01 localhost podman[55002]: 2025-12-02 08:01:01.233229474 +0000 UTC m=+0.242558294 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step1, release=1761123044) Dec 2 03:01:01 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:01:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:01:32 localhost systemd[1]: tmp-crun.A5RDPF.mount: Deactivated successfully. 
Dec 2 03:01:32 localhost podman[55118]: 2025-12-02 08:01:32.09164362 +0000 UTC m=+0.099711945 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, container_name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Dec 2 03:01:32 localhost podman[55118]: 2025-12-02 08:01:32.291847303 +0000 UTC m=+0.299915638 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, container_name=metrics_qdr, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team) Dec 2 03:01:32 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:01:32 localhost sshd[55147]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:02:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:02:03 localhost systemd[1]: tmp-crun.auX6cG.mount: Deactivated successfully. 
Dec 2 03:02:03 localhost podman[55149]: 2025-12-02 08:02:03.085884915 +0000 UTC m=+0.086581022 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, release=1761123044, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:02:03 localhost podman[55149]: 2025-12-02 08:02:03.282480088 +0000 UTC m=+0.283176205 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, config_id=tripleo_step1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=) Dec 2 03:02:03 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:02:14 localhost sshd[55257]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:02:31 localhost sshd[55259]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:02:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:02:34 localhost podman[55261]: 2025-12-02 08:02:34.077372809 +0000 UTC m=+0.084588810 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container) Dec 2 03:02:34 localhost podman[55261]: 2025-12-02 08:02:34.260969195 +0000 UTC m=+0.268185216 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:02:34 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:02:44 localhost sshd[55289]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:03:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:03:05 localhost systemd[1]: tmp-crun.XYiGHe.mount: Deactivated successfully. 
Dec 2 03:03:05 localhost podman[55291]: 2025-12-02 08:03:05.060005467 +0000 UTC m=+0.067909264 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:03:05 localhost podman[55291]: 2025-12-02 08:03:05.243313065 +0000 UTC m=+0.251216932 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, version=17.1.12, release=1761123044) Dec 2 03:03:05 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:03:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:03:36 localhost systemd[1]: tmp-crun.lJy87M.mount: Deactivated successfully. 
Dec 2 03:03:36 localhost podman[55396]: 2025-12-02 08:03:36.071561256 +0000 UTC m=+0.078605167 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, 
Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044) Dec 2 03:03:36 localhost podman[55396]: 2025-12-02 08:03:36.295216025 +0000 UTC m=+0.302259936 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, vcs-type=git) Dec 2 03:03:36 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:03:40 localhost sshd[55427]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:03:58 localhost sshd[55429]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:04:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:04:07 localhost podman[55431]: 2025-12-02 08:04:07.136158816 +0000 UTC m=+0.138389271 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=metrics_qdr, io.buildah.version=1.41.4) Dec 2 03:04:07 localhost podman[55431]: 2025-12-02 08:04:07.351126771 +0000 UTC m=+0.353357246 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO 
Team, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=) Dec 2 03:04:07 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:04:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:04:38 localhost podman[55536]: 2025-12-02 08:04:38.056589928 +0000 UTC m=+0.064053277 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=) Dec 2 03:04:38 localhost podman[55536]: 2025-12-02 08:04:38.250913016 +0000 UTC m=+0.258376355 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, release=1761123044, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1) Dec 2 03:04:38 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:04:39 localhost sshd[55564]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:05:04 localhost sshd[55566]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:05:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:05:09 localhost podman[55568]: 2025-12-02 08:05:09.069021553 +0000 UTC m=+0.071895807 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 2 03:05:09 localhost podman[55568]: 2025-12-02 08:05:09.301946189 +0000 UTC m=+0.304820413 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, version=17.1.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc.) Dec 2 03:05:09 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. 
Dec 2 03:05:27 localhost ceph-osd[32707]: osd.4 pg_epoch: 21 pg[2.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [4,5,3] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:28 localhost sshd[55673]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:05:28 localhost ceph-osd[32707]: osd.4 pg_epoch: 22 pg[2.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [4,5,3] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:31 localhost ceph-osd[32707]: osd.4 pg_epoch: 23 pg[3.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [5,4,0] r=1 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:32 localhost ceph-osd[32707]: osd.4 pg_epoch: 25 pg[4.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [3,4,5] r=1 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:34 localhost ceph-osd[32707]: osd.4 pg_epoch: 27 pg[5.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [2,3,4] r=2 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:05:40 localhost podman[55675]: 2025-12-02 08:05:40.08923907 +0000 UTC m=+0.089076253 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4) Dec 2 03:05:40 localhost podman[55675]: 2025-12-02 08:05:40.306138817 +0000 UTC m=+0.305975960 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:05:40 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. 
Dec 2 03:05:41 localhost ceph-osd[32707]: osd.4 pg_epoch: 33 pg[2.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=11.269608498s) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active pruub 1117.107177734s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,3], acting [4,5,3] -> [4,5,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:41 localhost ceph-osd[32707]: osd.4 pg_epoch: 33 pg[3.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=13.772205353s) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active pruub 1119.609741211s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,0], acting [5,4,0] -> [5,4,0], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:41 localhost ceph-osd[32707]: osd.4 pg_epoch: 33 pg[2.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=11.269608498s) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown pruub 1117.107177734s@ mbc={}] state: transitioning to Primary Dec 2 03:05:41 localhost ceph-osd[32707]: osd.4 pg_epoch: 33 pg[3.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=13.769854546s) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1119.609741211s@ mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.19( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.18( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: 
osd.4 pg_epoch: 34 pg[2.19( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.17( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.16( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.16( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.18( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.17( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.15( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.14( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.14( empty 
local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.15( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.13( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.12( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.12( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.13( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.11( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.10( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.10( empty local-lis/les=23/24 n=0 ec=33/23 
lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.11( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.f( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.e( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.e( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.d( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.c( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.f( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.b( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) 
[4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.d( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.a( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.b( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.a( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.1( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.6( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.7( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 
mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.7( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.c( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.2( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.3( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.6( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.3( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.2( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.4( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: 
transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.4( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.5( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.5( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.8( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.9( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.9( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.1b( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.8( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 
03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1a( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.1d( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1b( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.1a( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1c( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.1c( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1d( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1e( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: 
osd.4 pg_epoch: 34 pg[3.1e( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.1f( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1f( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.0( empty local-lis/les=33/34 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated 
Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 
sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost 
ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 
crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:42 localhost ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:43 localhost ceph-osd[32707]: osd.4 pg_epoch: 35 pg[5.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=15.478153229s) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 1123.410644531s@ mbc={}] start_peering_interval up [2,3,4] -> [2,3,4], acting [2,3,4] -> [2,3,4], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:43 localhost ceph-osd[32707]: osd.4 pg_epoch: 35 pg[4.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=35 pruub=13.107388496s) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active pruub 1121.039916992s@ mbc={}] start_peering_interval up [3,4,5] -> [3,4,5], acting [3,4,5] -> [3,4,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 1, features acting 
4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:43 localhost ceph-osd[32707]: osd.4 pg_epoch: 35 pg[5.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=15.476300240s) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.410644531s@ mbc={}] state: transitioning to Stray Dec 2 03:05:43 localhost ceph-osd[32707]: osd.4 pg_epoch: 35 pg[4.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=35 pruub=13.105083466s) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1121.039916992s@ mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.0 deep-scrub starts Dec 2 03:05:44 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.0 deep-scrub ok Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.18( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.18( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.19( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.1b( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.19( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 
mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.1a( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.1b( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.1d( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.1c( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.1a( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.1c( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.1d( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.f( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 
unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.e( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.f( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.3( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.e( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.2( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.2( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.3( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.5( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] 
state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.4( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.5( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.1( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.1( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.7( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.6( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.6( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.c( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to 
Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.7( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.d( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.d( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.c( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.a( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.b( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.b( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.4( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 
localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.8( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.9( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.9( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.8( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.a( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.17( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.17( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.16( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost 
ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.14( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.16( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.15( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.14( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.13( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.13( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.12( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.12( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost 
ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.10( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.15( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.11( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.10( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.1e( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.1f( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.1f( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.1e( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost 
ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.11( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:46 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.1b scrub starts Dec 2 03:05:46 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.1b scrub ok Dec 2 03:05:48 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.1a deep-scrub starts Dec 2 03:05:48 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.1a deep-scrub ok Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.d( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.f( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.10( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.14( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.9( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.13( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,3,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown 
mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.1b( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,0,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.11( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,2,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.1c( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,3,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.16( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.16( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,3,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.19( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643611908s) [0,1,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913696289s@ mbc={}] start_peering_interval up [5,4,0] -> [0,1,2], acting [5,4,0] -> [0,1,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775737762s) [1,2,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045898438s@ mbc={}] start_peering_interval up [2,3,4] -> [1,2,0], acting 
[2,3,4] -> [1,2,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1f( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774971962s) [4,5,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045288086s@ mbc={}] start_peering_interval up [2,3,4] -> [4,5,3], acting [2,3,4] -> [4,5,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.19( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643533707s) [0,1,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913696289s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1f( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774971962s) [4,5,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.045288086s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775689125s) [1,2,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.045898438s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.14( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643563271s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913818359s@ mbc={}] start_peering_interval up [5,4,0] -> [1,2,0], acting [5,4,0] -> [1,2,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.14( empty local-lis/les=33/34 
n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643545151s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913818359s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.15( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643477440s) [2,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914062500s@ mbc={}] start_peering_interval up [5,4,0] -> [2,1,0], acting [5,4,0] -> [2,1,0], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.12( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643494606s) [0,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914062500s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.15( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643445015s) [2,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914062500s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.12( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643472672s) [0,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914062500s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774338722s) [0,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045654297s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,2], acting [2,3,4] -> [0,1,2], 
acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.17( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643100739s) [0,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914062500s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.13( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642590523s) [1,3,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913818359s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,2], acting [5,4,0] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.10( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643815041s) [1,5,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915161133s@ mbc={}] start_peering_interval up [5,4,0] -> [1,5,3], acting [5,4,0] -> [1,5,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.17( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642649651s) [0,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914062500s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.10( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643723488s) [1,5,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 
1122.915161133s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.13( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642518997s) [1,3,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913818359s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772107124s) [1,5,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.043823242s@ mbc={}] start_peering_interval up [2,3,4] -> [1,5,0], acting [2,3,4] -> [1,5,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642672539s) [2,4,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914306641s@ mbc={}] start_peering_interval up [5,4,0] -> [2,4,0], acting [5,4,0] -> [2,4,0], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774074554s) [0,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.045654297s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642640114s) [2,4,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914306641s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772087097s) [1,5,0] r=-1 
lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.043823242s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642460823s) [1,5,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914306641s@ mbc={}] start_peering_interval up [5,4,0] -> [1,5,0], acting [5,4,0] -> [1,5,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642405510s) [1,5,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914306641s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.a( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773132324s) [0,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045166016s@ mbc={}] start_peering_interval up [2,3,4] -> [0,2,4], acting [2,3,4] -> [0,2,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642447472s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914428711s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.a( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773102760s) [0,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.045166016s@ mbc={}] 
state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642099380s) [1,2,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914306641s@ mbc={}] start_peering_interval up [5,4,0] -> [1,2,3], acting [5,4,0] -> [1,2,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642024994s) [1,2,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914306641s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.767205238s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.039550781s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.767205238s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.039550781s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642556190s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914916992s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 
37 pg[5.8( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771138191s) [2,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.043579102s@ mbc={}] start_peering_interval up [2,3,4] -> [2,0,1], acting [2,3,4] -> [2,0,1], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.8( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771058083s) [2,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.043579102s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642556190s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914916992s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642447472s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914428711s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.6( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641952515s) [0,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914672852s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766492844s) [0,4,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.039184570s@ mbc={}] start_peering_interval up [2,3,4] -> [0,4,5], acting [2,3,4] -> [0,4,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.6( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641885757s) [0,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914672852s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.3( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641731262s) [4,0,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914672852s@ mbc={}] start_peering_interval up [5,4,0] -> [4,0,5], acting [5,4,0] -> [4,0,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766381264s) [0,4,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.039184570s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.3( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641731262s) [4,0,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914672852s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772113800s) [0,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045166016s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,2], acting [2,3,4] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772085190s) [0,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.045166016s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.5( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641571999s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914672852s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.5( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641571999s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914672852s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641440392s) [0,4,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914428711s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,2], acting [5,4,0] -> [0,4,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771500587s) [4,0,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.044677734s@ mbc={}] start_peering_interval up [2,3,4] -> [4,0,2], acting [2,3,4] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771500587s) [4,0,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.044677734s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.8( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641579628s) [2,0,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914916992s@ mbc={}] start_peering_interval up [5,4,0] -> [2,0,4], acting [5,4,0] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.8( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641543388s) [2,0,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914916992s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641202927s) [0,4,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914428711s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1b( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766919136s) [1,0,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.040405273s@ mbc={}] start_peering_interval up [2,3,4] -> [1,0,2], acting [2,3,4] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1b( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766897202s) [1,0,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.040405273s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.770991325s) [2,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.044555664s@ mbc={}] start_peering_interval up [2,3,4] -> [2,4,3], acting [2,3,4] -> [2,4,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.770950317s) [2,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.044555664s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641399384s) [0,1,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915039062s@ mbc={}] start_peering_interval up [5,4,0] -> [0,1,5], acting [5,4,0] -> [0,1,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.640690804s) [1,3,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914428711s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,2], acting [5,4,0] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641380310s) [0,1,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915039062s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.640630722s) [1,3,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914428711s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772137642s) [0,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.046020508s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,5], acting [2,3,4] -> [0,1,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.764846802s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.038696289s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,3], acting [2,3,4] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.764846802s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.038696289s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772120476s) [0,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.046020508s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.640861511s) [3,2,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915161133s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,4], acting [5,4,0] -> [3,2,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.640952110s) [0,4,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915283203s@ mbc={}] start_peering_interval up [4,5,3] -> [0,4,2], acting [4,5,3] -> [0,4,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.640828133s) [3,2,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915161133s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.640813828s) [0,4,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915283203s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.639932632s) [3,5,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915039062s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.639659882s) [3,5,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915039062s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775296211s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.050903320s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775296211s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.050903320s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775214195s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.051025391s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.639222145s) [2,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915161133s@ mbc={}] start_peering_interval up [4,5,3] -> [2,1,0], acting [4,5,3] -> [2,1,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.639042854s) [2,4,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914916992s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775214195s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.051025391s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.638940811s) [2,4,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914916992s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.639085770s) [2,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915161133s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.639106750s) [5,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915283203s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,3], acting [5,4,0] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.639085770s) [5,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915283203s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.780861855s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057128906s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.780861855s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.057128906s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.637595177s) [5,3,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914184570s@ mbc={}] start_peering_interval up [5,4,0] -> [5,3,4], acting [5,4,0] -> [5,3,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.19( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774245262s) [2,3,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.050903320s@ mbc={}] start_peering_interval up [3,4,5] -> [2,3,1], acting [3,4,5] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.637558937s) [5,3,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914184570s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.19( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774185181s) [2,3,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.050903320s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774166107s) [2,1,3] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.050903320s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,3], acting [3,4,5] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1c( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.770162582s) [4,2,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.046997070s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,0], acting [2,3,4] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774127007s) [2,1,3] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.050903320s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1c( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.770162582s) [4,2,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.046997070s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630984306s) [1,5,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.908081055s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,3], acting [4,5,3] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773820877s) [2,3,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.050903320s@ mbc={}] start_peering_interval up [3,4,5] -> [2,3,4], acting [3,4,5] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631194115s) [5,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.908081055s@ mbc={}] start_peering_interval up [4,5,3] -> [5,4,3], acting [4,5,3] -> [5,4,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636497498s) [5,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913452148s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,3], acting [5,4,0] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773624420s) [2,3,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.050903320s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636285782s) [5,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913452148s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.761549950s) [3,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.038940430s@ mbc={}] start_peering_interval up [2,3,4] -> [3,1,5], acting [2,3,4] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630871773s) [5,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.908081055s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.761529922s) [3,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.038940430s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630474091s) [3,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.907958984s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,5], acting [4,5,3] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630527496s) [1,5,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.908081055s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.16( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636249542s) [1,3,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913818359s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,5], acting [5,4,0] -> [1,3,5], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630374908s) [3,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.907958984s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773282051s) [3,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.051025391s@ mbc={}] start_peering_interval up [3,4,5] -> [3,2,4], acting [3,4,5] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773192406s) [3,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.051025391s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.16( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636229515s) [1,3,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913818359s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.e( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.768943787s) [2,0,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.046752930s@ mbc={}] start_peering_interval up [2,3,4] -> [2,0,4], acting [2,3,4] -> [2,0,4], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.769277573s) [1,3,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.047119141s@ mbc={}] start_peering_interval up [2,3,4] -> [1,3,2], acting [2,3,4] -> [1,3,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.9( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636951447s) [5,1,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914916992s@ mbc={}] start_peering_interval up [5,4,0] -> [5,1,3], acting [5,4,0] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.769010544s) [1,3,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.047119141s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.9( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636927605s) [5,1,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914916992s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.e( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.768909454s) [2,0,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.046752930s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.767914772s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.046630859s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,3], acting [2,3,4] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772156715s) [4,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.050903320s@ mbc={}] start_peering_interval up [3,4,5] -> [4,5,0], acting [3,4,5] -> [4,5,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.1a( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.4( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.635618210s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914550781s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,1], acting [5,4,0] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634724617s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913574219s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,0], acting [4,5,3] -> [1,2,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772156715s) [4,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.050903320s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.4( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.635580063s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914550781s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634698868s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913574219s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634529114s) [2,0,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913452148s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.767914772s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.046630859s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.3( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772084236s) [2,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.051147461s@ mbc={}] start_peering_interval up [3,4,5] -> [2,4,3], acting [3,4,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.3( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772066116s) [2,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.051147461s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.d( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766921043s) [2,4,0] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.046020508s@ mbc={}] start_peering_interval up [2,3,4] -> [2,4,0], acting [2,3,4] -> [2,4,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634247780s) [2,0,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913452148s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.d( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766852379s) [2,4,0] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.046020508s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771631241s) [2,1,3] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.051147461s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,3], acting [3,4,5] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771612167s) [2,1,3] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.051147461s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.8( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634372711s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914062500s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,5], acting [4,5,3] -> [4,3,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.2( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634899139s) [3,5,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914550781s@ mbc={}] start_peering_interval up [5,4,0] -> [3,5,1], acting [5,4,0] -> [3,5,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771496773s) [1,5,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.051147461s@ mbc={}] start_peering_interval up [3,4,5] -> [1,5,0], acting [3,4,5] -> [1,5,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.2( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634829521s) [3,5,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914550781s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634372711s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914062500s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771477699s) [1,5,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.051147461s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633814812s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913574219s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,1], acting [4,5,3] -> [3,2,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633731842s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913574219s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633767128s) [1,0,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913696289s@ mbc={}] start_peering_interval up [4,5,3] -> [1,0,2], acting [4,5,3] -> [1,0,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633722305s) [1,0,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913696289s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.776829720s) [0,5,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.056884766s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,1], acting [3,4,5] -> [0,5,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.776655197s) [0,5,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown
NOTIFY pruub 1125.056884766s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.765070915s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045654297s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.776633263s) [2,1,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057128906s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,0], acting [3,4,5] -> [2,1,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.7( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634313583s) [3,5,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914916992s@ mbc={}] start_peering_interval up [5,4,0] -> [3,5,4], acting [5,4,0] -> [3,5,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.7( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634293556s) [3,5,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914916992s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.776563644s) [2,1,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057128906s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 
localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.5( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.2( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,0,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.632801056s) [3,2,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913452148s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,4], acting [4,5,3] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633387566s) [4,2,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914184570s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,3], acting [4,5,3] -> [4,2,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.632761002s) [3,2,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913452148s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633387566s) [4,2,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914184570s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1( 
empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.632960320s) [3,5,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913818359s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.632933617s) [3,5,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913818359s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.763193130s) [5,3,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.044189453s@ mbc={}] start_peering_interval up [2,3,4] -> [5,3,4], acting [2,3,4] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775778770s) [0,5,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057006836s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,4], acting [3,4,5] -> [0,5,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.763002396s) [5,3,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.044189453s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 
pruub=11.765070915s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.045654297s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775720596s) [0,5,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057006836s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.16( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.6( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775480270s) [5,3,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057128906s@ mbc={}] start_peering_interval up [3,4,5] -> [5,3,4], acting [3,4,5] -> [5,3,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.6( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775454521s) [5,3,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057128906s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631974220s) [2,3,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914184570s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,1], acting [4,5,3] -> [2,3,1], acting_primary 4 -> 2, 
up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.632916451s) [3,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915039062s@ mbc={}] start_peering_interval up [5,4,0] -> [3,4,5], acting [5,4,0] -> [3,4,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631913185s) [2,3,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914184570s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.6( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.756907463s) [3,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.039062500s@ mbc={}] start_peering_interval up [2,3,4] -> [3,1,2], acting [2,3,4] -> [3,1,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631725311s) [5,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914062500s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,0], acting [4,5,3] -> [5,1,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.632796288s) [3,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915039062s@ mbc={}] state: 
transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774639130s) [4,3,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057006836s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,2], acting [3,4,5] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631699562s) [5,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914062500s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.6( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.756800652s) [3,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.039062500s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774639130s) [4,3,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.057006836s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774272919s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.056884766s@ mbc={}] start_peering_interval up [3,4,5] -> [4,2,3], acting [3,4,5] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774272919s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 
unknown pruub 1125.056884766s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.757686615s) [3,4,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.040405273s@ mbc={}] start_peering_interval up [2,3,4] -> [3,4,2], acting [2,3,4] -> [3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.b( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760393143s) [5,0,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.043212891s@ mbc={}] start_peering_interval up [2,3,4] -> [5,0,4], acting [2,3,4] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631577492s) [2,0,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914428711s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.b( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760374069s) [5,0,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.043212891s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774301529s) [1,0,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057006836s@ mbc={}] start_peering_interval up [3,4,5] -> [1,0,2], 
acting [3,4,5] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774095535s) [1,0,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057006836s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.757373810s) [3,4,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.040405273s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631469727s) [2,0,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914428711s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630979538s) [5,1,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914306641s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630958557s) [5,1,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914306641s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630856514s) [3,2,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914184570s@ 
mbc={}] start_peering_interval up [4,5,3] -> [3,2,4], acting [4,5,3] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630836487s) [3,2,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914184570s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773843765s) [0,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057250977s@ mbc={}] start_peering_interval up [3,4,5] -> [0,1,5], acting [3,4,5] -> [0,1,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773828506s) [0,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057250977s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.762653351s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.046020508s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774485588s) [5,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.058227539s@ mbc={}] start_peering_interval up [3,4,5] -> [5,4,3], acting [3,4,5] -> [5,4,3], 
acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.762653351s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.046020508s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774389267s) [5,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.058227539s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774335861s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.058227539s@ mbc={}] start_peering_interval up [3,4,5] -> [5,0,1], acting [3,4,5] -> [5,0,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774293900s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.058227539s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630635262s) [2,4,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914428711s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.16( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 
les/c/f=36/36/0 sis=37 pruub=11.773723602s) [0,4,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057861328s@ mbc={}] start_peering_interval up [3,4,5] -> [0,4,5], acting [3,4,5] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.16( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773691177s) [0,4,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057861328s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.17( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773790359s) [3,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.058105469s@ mbc={}] start_peering_interval up [3,4,5] -> [3,1,5], acting [3,4,5] -> [3,1,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.17( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.761653900s) [3,5,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.046020508s@ mbc={}] start_peering_interval up [2,3,4] -> [3,5,4], acting [2,3,4] -> [3,5,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630414963s) [2,4,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914428711s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.17( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.761591911s) [3,5,4] r=2 lpr=37 pi=[35,37)/1 
crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.046020508s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629897118s) [4,3,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914428711s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,2], acting [4,5,3] -> [4,3,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,0,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629917145s) [2,0,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914550781s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,4], acting [4,5,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629857063s) [5,3,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914550781s@ mbc={}] start_peering_interval up [4,5,3] -> [5,3,1], acting [4,5,3] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629890442s) [2,0,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914550781s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost 
ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629816055s) [5,3,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914550781s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629897118s) [4,3,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914428711s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.17( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773769379s) [3,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.058105469s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.10( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760544777s) [4,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045532227s@ mbc={}] start_peering_interval up [2,3,4] -> [4,5,0], acting [2,3,4] -> [4,5,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773128510s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057983398s@ mbc={}] start_peering_interval up [3,4,5] -> [5,0,1], acting [3,4,5] -> [5,0,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.10( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760544777s) [4,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.045532227s@ mbc={}] state: 
transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772986412s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057983398s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629462242s) [2,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914672852s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,3], acting [4,5,3] -> [2,4,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629435539s) [2,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914672852s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.15( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772093773s) [5,3,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057373047s@ mbc={}] start_peering_interval up [3,4,5] -> [5,3,1], acting [3,4,5] -> [5,3,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.15( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772067070s) [5,3,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057373047s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772439003s) [0,5,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 
mlcod 0'0 active pruub 1125.057861328s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,4], acting [3,4,5] -> [0,5,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772418022s) [0,5,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057861328s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760279655s) [3,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045898438s@ mbc={}] start_peering_interval up [2,3,4] -> [3,2,4], acting [2,3,4] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760244370s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045898438s@ mbc={}] start_peering_interval up [2,3,4] -> [5,0,1], acting [2,3,4] -> [5,0,1], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760226250s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.045898438s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760129929s) [3,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.045898438s@ mbc={}] state: transitioning 
to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629001617s) [4,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914794922s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,0], acting [4,5,3] -> [4,2,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772271156s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.058105469s@ mbc={}] start_peering_interval up [3,4,5] -> [4,2,3], acting [3,4,5] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629001617s) [4,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914794922s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772271156s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.058105469s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628746986s) [5,0,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914672852s@ mbc={}] start_peering_interval up [4,5,3] -> [5,0,4], acting [4,5,3] -> [5,0,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.12( empty 
local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.758749962s) [5,1,3] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.044799805s@ mbc={}] start_peering_interval up [2,3,4] -> [5,1,3], acting [2,3,4] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628642082s) [5,0,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914672852s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628845215s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914916992s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,0], acting [4,5,3] -> [1,2,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.758734703s) [5,1,3] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.044799805s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771629333s) [3,4,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.058105469s@ mbc={}] start_peering_interval up [3,4,5] -> [3,4,2], acting [3,4,5] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 
pruub=11.771441460s) [3,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057983398s@ mbc={}] start_peering_interval up [3,4,5] -> [3,2,4], acting [3,4,5] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771579742s) [3,4,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.058105469s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628723145s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914916992s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771390915s) [3,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057983398s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628582954s) [5,1,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915039062s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628530502s) [1,5,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915161133s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,3], acting [4,5,3] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> 
-1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628270149s) [5,1,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915039062s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771138191s) [0,4,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057983398s@ mbc={}] start_peering_interval up [3,4,5] -> [0,4,5], acting [3,4,5] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.18( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.626810074s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913696289s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,1], acting [5,4,0] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628399849s) [1,5,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915161133s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.18( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.626704216s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913696289s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.627779961s) 
[3,4,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914794922s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,2], acting [4,5,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.770967484s) [2,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.058105469s@ mbc={}] start_peering_interval up [3,4,5] -> [2,4,3], acting [3,4,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.627736092s) [3,4,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914794922s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.770787239s) [2,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.058105469s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771070480s) [0,4,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057983398s@ mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.19( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.1f( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [0,1,5] r=1 lpr=37 
pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.1d( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [3,1,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.4( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,5,1] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.3( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.4( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,2,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.6( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [3,1,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,2,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.8( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [2,0,1] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.2( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,5,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 
localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.19( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [2,3,1] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.19( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [0,1,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.c( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,0,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 37 pg[6.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [0,4,2] r=1 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.1e( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.1c( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,1,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.b( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.18( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,2,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.2( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 
les/c/f=36/36/0 sis=37) [2,1,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.17( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [3,1,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,0,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [2,1,0] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,3,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.15( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,1,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.1d( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [2,1,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.9( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,1,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.b( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,1,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] 
state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,1,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.9( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [5,0,1] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.14( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [5,0,1] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.12( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,3,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.15( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [5,3,1] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.13( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [5,0,1] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.12( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [5,1,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[4.d( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost 
ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.18( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,1,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,3,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[2.2( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,0,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.18( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[4.a( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,0,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[5.1b( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,0,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[2.8( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active 
mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[5.11( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,2,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.2( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,0,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[2.16( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[2.7( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,2,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,3,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[5.16( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,3,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[4.c( empty 
local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,3,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.f( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[4.13( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[2.14( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[2.11( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,3,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.1c( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,2,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[4.1a( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[4.18( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react 
AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[4.1b( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[2.1a( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[4.5( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[4.e( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[3.5( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.1( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[3.3( empty local-lis/les=37/38 n=0 ec=33/23 
lis/c=33/33 les/c/f=34/34/0 sis=37) [4,0,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.7( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[2.3( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[3.a( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[3.c( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[5.9( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[2.17( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete 
Dec 2 03:05:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.10( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.1f( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,5,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.15( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:51 localhost ceph-osd[31770]: osd.1 pg_epoch: 39 pg[7.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1,5,3] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:51 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.1b scrub starts Dec 2 03:05:52 localhost ceph-osd[31770]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1,5,3] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:55 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.1c scrub starts Dec 2 03:05:55 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.1c scrub ok Dec 2 03:05:55 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.1f scrub starts Dec 2 03:05:56 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.13 scrub starts Dec 2 
03:05:56 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.f scrub starts Dec 2 03:05:56 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.13 scrub ok Dec 2 03:05:57 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.10 scrub starts Dec 2 03:05:57 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.10 scrub ok Dec 2 03:05:58 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.d scrub starts Dec 2 03:05:58 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.d scrub ok Dec 2 03:05:59 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.2 scrub starts Dec 2 03:05:59 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.2 scrub ok Dec 2 03:06:00 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.14 scrub starts Dec 2 03:06:02 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.1c scrub starts Dec 2 03:06:02 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.1c scrub ok Dec 2 03:06:04 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.3 scrub starts Dec 2 03:06:04 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.3 scrub ok Dec 2 03:06:08 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.a deep-scrub starts Dec 2 03:06:08 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.a deep-scrub ok Dec 2 03:06:09 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts Dec 2 03:06:09 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok Dec 2 03:06:10 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.9 scrub starts Dec 2 03:06:10 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.9 scrub ok Dec 2 03:06:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:06:11 localhost systemd[1]: tmp-crun.rVPRaY.mount: Deactivated successfully. 
Dec 2 03:06:11 localhost podman[55751]: 2025-12-02 08:06:11.057818244 +0000 UTC m=+0.062168358 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, tcib_managed=true)
Dec 2 03:06:11 localhost podman[55751]: 2025-12-02 08:06:11.228879956 +0000 UTC m=+0.233230080 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, distribution-scope=public, release=1761123044, container_name=metrics_qdr, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team)
Dec 2 03:06:11 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 2 03:06:11 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 2 03:06:11 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 2 03:06:13 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 2 03:06:13 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 2 03:06:14 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 2 03:06:14 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 2 03:06:15 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 2 03:06:15 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 2 03:06:15 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 2 03:06:15 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 2 03:06:16 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 2 03:06:16 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 2 03:06:19 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 2 03:06:19 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 2 03:06:20 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 4.5 deep-scrub starts
Dec 2 03:06:20 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 4.5 deep-scrub ok
Dec 2 03:06:21 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Dec 2 03:06:21 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Dec 2 03:06:22 localhost sshd[55780]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 03:06:24 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 2 03:06:24 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 2 03:06:25 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 2 03:06:25 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 2 03:06:26 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.13 deep-scrub starts
Dec 2 03:06:26 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.13 deep-scrub ok
Dec 2 03:06:27 localhost python3[55797]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:06:28 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 2 03:06:28 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 2 03:06:28 localhost python3[55813]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:06:29 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 2 03:06:30 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 2 03:06:30 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 2 03:06:30 localhost python3[55829]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:06:31 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 2 03:06:31 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 2 03:06:32 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 2 03:06:32 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 2 03:06:33 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.1a deep-scrub starts
Dec 2 03:06:33 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.1a deep-scrub ok
Dec 2 03:06:33 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 2 03:06:33 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 2 03:06:34 localhost python3[55877]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:06:34 localhost python3[55920]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662793.7121615-91322-6419718888736/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=55e6802793866e8195bd7dc6c06395cc4184e741 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:06:34 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 2 03:06:34 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 2 03:06:35 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 2 03:06:35 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 2 03:06:37 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 2 03:06:37 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 2 03:06:38 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 2 03:06:38 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 2 03:06:39 localhost python3[55982]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:06:39 localhost python3[56025]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662798.8278913-91322-73826038523956/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=32e95cb48a0c881d4099e3645e940da5c77bc88c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:06:40 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 2 03:06:40 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 2 03:06:41 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 2 03:06:41 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 2 03:06:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 2 03:06:42 localhost systemd[1]: tmp-crun.1iQotj.mount: Deactivated successfully.
Dec 2 03:06:42 localhost podman[56040]: 2025-12-02 08:06:42.080009591 +0000 UTC m=+0.087045189 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, release=1761123044, version=17.1.12, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git)
Dec 2 03:06:42 localhost podman[56040]: 2025-12-02 08:06:42.29776343 +0000 UTC m=+0.304798988 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 2 03:06:42 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 2 03:06:42 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 2 03:06:42 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 2 03:06:43 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 2 03:06:43 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 2 03:06:44 localhost python3[56117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:06:44 localhost python3[56160]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662803.8778832-91322-249950130745560/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=ed42d7e7572ec51630a216299b8e7374862502cf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:06:45 localhost ceph-osd[31770]: osd.1 pg_epoch: 45 pg[7.0( v 42'39 (0'0,42'39] local-lis/les=39/40 n=22 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=11.660870552s) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 42'38 mlcod 42'38 active pruub 1185.561279297s@ mbc={}] start_peering_interval up [1,5,3] -> [1,5,3], acting [1,5,3] -> [1,5,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:45 localhost ceph-osd[31770]: osd.1 pg_epoch: 45 pg[7.0( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=11.660870552s) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 42'38 mlcod 0'0 unknown pruub 1185.561279297s@ mbc={}] state: transitioning to Primary
Dec 2 03:06:45 localhost ceph-osd[32707]: osd.4 pg_epoch: 45 pg[6.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=8.961356163s) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 1178.329833984s@ mbc={}] start_peering_interval up [0,4,2] -> [0,4,2], acting [0,4,2] -> [0,4,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:45 localhost ceph-osd[32707]: osd.4 pg_epoch: 45 pg[6.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=8.957962990s) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1178.329833984s@ mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.d( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.c( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.9( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.b( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.6( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.7( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.1( v 42'39 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.1b( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.19( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.1e( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.1f( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.d( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.18( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.1( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.c( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.7( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.1a( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.6( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.4( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.a( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.8( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.2( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.f( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.3( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.e( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.5( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.3( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.2( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.5( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.4( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.e( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.f( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.8( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.9( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.a( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.b( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.14( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.17( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.15( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.16( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.10( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.11( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.13( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.1c( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.1d( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.12( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.0( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 42'38 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.1( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.6( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.2( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.a( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.4( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.3( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.8( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.5( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:46 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec 2 03:06:46 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec 2 03:06:47 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts
Dec 2 03:06:47 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.c deep-scrub ok
Dec 2 03:06:48 localhost python3[56222]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:06:49 localhost python3[56267]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662808.4753876-91678-138402771932315/source _original_basename=tmp3eiuoydn follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.b( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.9( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.f( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.772628784s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.978027344s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.772575378s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.978027344s@ mbc={}] state: transitioning to Stray
Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.771993637s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.977416992s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 ->
4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.771928787s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.977416992s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.771564484s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.977172852s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.771483421s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.977172852s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.5( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.3( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.770465851s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.976928711s@ mbc={}] start_peering_interval up [1,5,3] -> 
[4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.7( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.1( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.769610405s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.976318359s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.770363808s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.976928711s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.1( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.769525528s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.976318359s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.1( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.d( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 
localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.769467354s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.976684570s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.3( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.770059586s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.977416992s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.769384384s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.976684570s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.3( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.770017624s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.977416992s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.5( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.770520210s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.978149414s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 
4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.5( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.770452499s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.978149414s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1b( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.780364037s) [5,1,0] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442993164s@ mbc={}] start_peering_interval up [0,4,2] -> [5,1,0], acting [0,4,2] -> [5,1,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1b( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.780282974s) [5,1,0] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.442993164s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.18( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779023170s) [0,1,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443237305s@ mbc={}] start_peering_interval up [0,4,2] -> [0,1,2], acting [0,4,2] -> [0,1,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.18( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778983116s) [0,1,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443237305s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1a( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 
pruub=12.780378342s) [4,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444580078s@ mbc={}] start_peering_interval up [0,4,2] -> [4,2,0], acting [0,4,2] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.19( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778674126s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442993164s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1a( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.780378342s) [4,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.444580078s@ mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1f( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.780510902s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444824219s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1f( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.780442238s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.444824219s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1e( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778202057s) [5,1,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active 
pruub 1186.442626953s@ mbc={}] start_peering_interval up [0,4,2] -> [5,1,3], acting [0,4,2] -> [5,1,3], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.19( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778571129s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.442993164s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1e( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778161049s) [5,1,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.442626953s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.d( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779086113s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443603516s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.d( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779058456s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443603516s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778654099s) [2,1,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443481445s@ mbc={}] start_peering_interval up [0,4,2] -> [2,1,3], acting [0,4,2] -> [2,1,3], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 
03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778604507s) [2,1,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443481445s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.c( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779118538s) [3,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443847656s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,5], acting [0,4,2] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.7( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779795647s) [4,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444702148s@ mbc={}] start_peering_interval up [0,4,2] -> [4,3,2], acting [0,4,2] -> [4,3,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.7( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779795647s) [4,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.444702148s@ mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.6( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778046608s) [3,4,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442993164s@ mbc={}] start_peering_interval up [0,4,2] -> [3,4,5], acting [0,4,2] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.c( empty 
local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778909683s) [3,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443847656s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.6( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778006554s) [3,4,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.442993164s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.3( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778153419s) [4,5,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443237305s@ mbc={}] start_peering_interval up [0,4,2] -> [4,5,0], acting [0,4,2] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.19( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.3( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778153419s) [4,5,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.443237305s@ mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.2( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778456688s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443847656s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.5( empty 
local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776854515s) [4,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442260742s@ mbc={}] start_peering_interval up [0,4,2] -> [4,2,0], acting [0,4,2] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.4( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779340744s) [3,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444824219s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,5], acting [0,4,2] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.5( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776854515s) [4,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.442260742s@ mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.4( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779304504s) [3,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.444824219s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.e( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779032707s) [4,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444580078s@ mbc={}] start_peering_interval up [0,4,2] -> [4,3,2], acting [0,4,2] -> [4,3,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.e( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 
pruub=12.779032707s) [4,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.444580078s@ mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.f( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776348114s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442138672s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.f( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776301384s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.442138672s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.9( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777090073s) [0,2,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442871094s@ mbc={}] start_peering_interval up [0,4,2] -> [0,2,4], acting [0,4,2] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.8( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777280807s) [1,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443115234s@ mbc={}] start_peering_interval up [0,4,2] -> [1,2,3], acting [0,4,2] -> [1,2,3], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.9( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777069092s) [0,2,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown 
NOTIFY pruub 1186.442871094s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.8( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777251244s) [1,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443115234s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.a( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776212692s) [4,0,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442138672s@ mbc={}] start_peering_interval up [0,4,2] -> [4,0,2], acting [0,4,2] -> [4,0,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.a( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776212692s) [4,0,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.442138672s@ mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.14( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778293610s) [3,4,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444335938s@ mbc={}] start_peering_interval up [0,4,2] -> [3,4,5], acting [0,4,2] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.14( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778268814s) [3,4,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown 
NOTIFY pruub 1186.444335938s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.b( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776980400s) [3,1,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443115234s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,2], acting [0,4,2] -> [3,1,2], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.15( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778181076s) [4,5,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444458008s@ mbc={}] start_peering_interval up [0,4,2] -> [4,5,0], acting [0,4,2] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.16( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778771400s) [0,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444946289s@ mbc={}] start_peering_interval up [0,4,2] -> [0,1,5], acting [0,4,2] -> [0,1,5], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.2( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778391838s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443847656s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 
47 pg[6.16( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778753281s) [0,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.444946289s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.15( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778181076s) [4,5,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.444458008s@ mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.b( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776881218s) [3,1,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443115234s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.17( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778571129s) [5,0,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444824219s@ mbc={}] start_peering_interval up [0,4,2] -> [5,0,1], acting [0,4,2] -> [5,0,1], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.10( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777273178s) [0,2,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443725586s@ mbc={}] start_peering_interval up [0,4,2] -> [0,2,4], acting [0,4,2] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.10( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777256966s) [0,2,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443725586s@ mbc={}] state: transitioning to Stray Dec 
Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.17( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778537750s) [5,0,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.444824219s@ mbc={}] state<Started/Stray>: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.11( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777244568s) [3,5,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443847656s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,4], acting [0,4,2] -> [3,5,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.13( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776783943s) [3,2,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443481445s@ mbc={}] start_peering_interval up [0,4,2] -> [3,2,1], acting [0,4,2] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.12( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777533531s) [5,4,0] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444091797s@ mbc={}] start_peering_interval up [0,4,2] -> [5,4,0], acting [0,4,2] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.13( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776764870s) [3,2,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443481445s@ mbc={}] state: transitioning to Stray
Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1c( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.774728775s) [5,3,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.441406250s@ mbc={}] start_peering_interval up [0,4,2] -> [5,3,4], acting [0,4,2] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1d( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776144981s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442871094s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:49 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1c( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.774652481s) [5,3,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.441406250s@ mbc={}] state: transitioning to Stray
Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1d( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776121140s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.442871094s@ mbc={}] state: transitioning to Stray
Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.12( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777493477s) [5,4,0] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.444091797s@ mbc={}] state: transitioning to Stray
Dec 2 03:06:49 localhost ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.11( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777163506s) [3,5,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443847656s@ mbc={}] state: transitioning to Stray
Dec 2 03:06:49 localhost sshd[56282]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 03:06:50 localhost python3[56331]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.1( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [2,1,3] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.1e( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,1,3] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.1b( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,1,0] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.17( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,0,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.18( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [0,1,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.16( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [0,1,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,1,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,1,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.13( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,2,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.1d( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,1,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.1f( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:06:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 48 pg[6.15( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,5,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 48 pg[6.19( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 48 pg[6.2( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 48 pg[6.8( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[31770]: osd.1 pg_epoch: 48 pg[6.d( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 48 pg[6.e( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.f( v 42'39 lc 42'1 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(1+2)=3}}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.b( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 48 pg[6.3( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,5,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.5( v 42'39 lc 42'11 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.3( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.1( v 42'39 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 48 pg[6.5( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.7( v 42'39 lc 42'21 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 48 pg[6.7( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 48 pg[6.1a( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 48 pg[6.a( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,0,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.d( v 42'39 lc 42'13 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state: react AllReplicasActivated Activating complete
Dec 2 03:06:50 localhost python3[56374]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662809.9636912-91763-237988596539909/source _original_basename=tmpk44yopjf follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:06:51 localhost python3[56404]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None
Dec 2 03:06:51 localhost python3[56422]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 03:06:51 localhost ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.6( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.709970474s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.976318359s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:51 localhost ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.6( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.709904671s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.976318359s@ mbc={}] state: transitioning to Stray
Dec 2 03:06:51 localhost ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.a( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.710413933s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.976806641s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:51 localhost ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.a( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.710338593s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.976806641s@ mbc={}] state: transitioning to Stray
Dec 2 03:06:51 localhost ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.2( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.710016251s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.976684570s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:51 localhost ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.2( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.709935188s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.976684570s@ mbc={}] state: transitioning to Stray
Dec 2 03:06:51 localhost ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.710141182s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.977416992s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:51 localhost ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.710083961s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.977416992s@ mbc={}] state: transitioning to Stray
Dec 2 03:06:53 localhost ansible-async_wrapper.py[56594]: Invoked with 582142990844 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662812.470347-91852-144939483418995/AnsiballZ_command.py _
Dec 2 03:06:53 localhost ansible-async_wrapper.py[56597]: Starting module and watcher
Dec 2 03:06:53 localhost ansible-async_wrapper.py[56597]: Start watching 56598 (3600)
Dec 2 03:06:53 localhost ansible-async_wrapper.py[56598]: Start module (56598)
Dec 2 03:06:53 localhost ansible-async_wrapper.py[56594]: Return async_wrapper task started.
Dec 2 03:06:53 localhost python3[56618]: ansible-ansible.legacy.async_status Invoked with jid=582142990844.56594 mode=status _async_dir=/tmp/.ansible_async
Dec 2 03:06:56 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 2 03:06:56 localhost puppet-user[56617]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 2 03:06:56 localhost puppet-user[56617]: (file: /etc/puppet/hiera.yaml)
Dec 2 03:06:56 localhost puppet-user[56617]: Warning: Undefined variable '::deploy_config_name';
Dec 2 03:06:56 localhost puppet-user[56617]: (file & line not available)
Dec 2 03:06:56 localhost puppet-user[56617]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 2 03:06:56 localhost puppet-user[56617]: (file & line not available)
Dec 2 03:06:56 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 2 03:06:56 localhost puppet-user[56617]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 2 03:06:56 localhost puppet-user[56617]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 2 03:06:56 localhost puppet-user[56617]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.12 seconds
Dec 2 03:06:56 localhost puppet-user[56617]: Notice: Applied catalog in 0.03 seconds
Dec 2 03:06:56 localhost puppet-user[56617]: Application:
Dec 2 03:06:56 localhost puppet-user[56617]: Initial environment: production
Dec 2 03:06:56 localhost puppet-user[56617]: Converged environment: production
Dec 2 03:06:56 localhost puppet-user[56617]: Run mode: user
Dec 2 03:06:56 localhost puppet-user[56617]: Changes:
Dec 2 03:06:56 localhost puppet-user[56617]: Events:
Dec 2 03:06:56 localhost puppet-user[56617]: Resources:
Dec 2 03:06:56 localhost puppet-user[56617]: Total: 10
Dec 2 03:06:56 localhost puppet-user[56617]: Time:
Dec 2 03:06:56 localhost puppet-user[56617]: Schedule: 0.00
Dec 2 03:06:56 localhost puppet-user[56617]: File: 0.00
Dec 2 03:06:56 localhost puppet-user[56617]: Exec: 0.01
Dec 2 03:06:56 localhost puppet-user[56617]: Augeas: 0.01
Dec 2 03:06:56 localhost puppet-user[56617]: Transaction evaluation: 0.03
Dec 2 03:06:56 localhost puppet-user[56617]: Catalog application: 0.03
Dec 2 03:06:56 localhost puppet-user[56617]: Config retrieval: 0.20
Dec 2 03:06:56 localhost puppet-user[56617]: Last run: 1764662816
Dec 2 03:06:56 localhost puppet-user[56617]: Filebucket: 0.00
Dec 2 03:06:56 localhost puppet-user[56617]: Total: 0.04
Dec 2 03:06:56 localhost puppet-user[56617]: Version:
Dec 2 03:06:56 localhost puppet-user[56617]: Config: 1764662816
Dec 2 03:06:56 localhost puppet-user[56617]: Puppet: 7.10.0
Dec 2 03:06:56 localhost ansible-async_wrapper.py[56598]: Module complete (56598)
Dec 2 03:06:58 localhost ansible-async_wrapper.py[56597]: Done in kid B.
Dec 2 03:06:59 localhost ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.3( v 42'39 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=47/47 les/c/f=48/49/0 sis=51 pruub=15.040720940s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 active pruub 1198.737792969s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:59 localhost ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.3( v 42'39 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=47/47 les/c/f=48/49/0 sis=51 pruub=15.040622711s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1198.737792969s@ mbc={}] state: transitioning to Stray
Dec 2 03:06:59 localhost ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/49/0 sis=51 pruub=15.033008575s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 active pruub 1198.730224609s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:59 localhost ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/49/0 sis=51 pruub=15.032918930s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1198.730224609s@ mbc={}] state: transitioning to Stray
Dec 2 03:06:59 localhost ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.041465759s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 active pruub 1198.738403320s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:59 localhost ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.040790558s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1198.738403320s@ mbc={}] state: transitioning to Stray
Dec 2 03:06:59 localhost ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=15.032548904s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 active pruub 1198.730224609s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:06:59 localhost ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=15.032447815s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1198.730224609s@ mbc={}] state: transitioning to Stray
Dec 2 03:06:59 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 2 03:06:59 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 2 03:07:00 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 2 03:07:00 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 2 03:07:01 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 2 03:07:01 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 2 03:07:01 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 2 03:07:01 localhost ceph-osd[31770]: osd.1 pg_epoch: 53 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=8.619408607s) [0,1,2] r=1 lpr=53 pi=[45,53)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1198.976440430s@ mbc={}] start_peering_interval up [1,5,3] -> [0,1,2], acting [1,5,3] -> [0,1,2], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:07:01 localhost ceph-osd[31770]: osd.1 pg_epoch: 53 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=8.619290352s) [0,1,2] r=1 lpr=53 pi=[45,53)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1198.976440430s@ mbc={}] state: transitioning to Stray
Dec 2 03:07:01 localhost ceph-osd[31770]: osd.1 pg_epoch: 53 pg[7.4( v 42'39 (0'0,42'39] local-lis/les=45/46 n=4 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=8.619512558s) [0,1,2] r=1 lpr=53 pi=[45,53)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1198.977294922s@ mbc={}] start_peering_interval up [1,5,3] -> [0,1,2], acting [1,5,3] -> [0,1,2], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:07:01 localhost ceph-osd[31770]: osd.1 pg_epoch: 53 pg[7.4( v 42'39 (0'0,42'39] local-lis/les=45/46 n=4 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=8.619264603s) [0,1,2] r=1 lpr=53 pi=[45,53)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1198.977294922s@ mbc={}] state: transitioning to Stray
Dec 2 03:07:01 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 2 03:07:02 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 2 03:07:02 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 2 03:07:02 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.3 deep-scrub starts
Dec 2 03:07:02 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.3 deep-scrub ok
Dec 2 03:07:03 localhost python3[56873]: ansible-ansible.legacy.async_status Invoked with jid=582142990844.56594 mode=status _async_dir=/tmp/.ansible_async
Dec 2 03:07:04 localhost python3[56889]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 2 03:07:04 localhost python3[56905]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 03:07:05 localhost python3[56955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:07:05 localhost python3[56973]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp1qy3p7o6 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 2 03:07:05 localhost python3[57003]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:07:06 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec 2 03:07:06 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec 2 03:07:07 localhost python3[57107]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 2 03:07:07 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 2 03:07:07 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 2 03:07:08 localhost python3[57126]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:07:09 localhost python3[57158]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 03:07:09 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 2 03:07:09 localhost python3[57208]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:07:09 localhost ceph-osd[32707]: osd.4 pg_epoch: 55 pg[7.5( v 42'39 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=12.940701485s) [2,0,4] r=2 lpr=55 pi=[47,55)/1 crt=42'39 mlcod 0'0 active pruub 1206.738037109s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [2,0,4], acting [4,2,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:07:09 localhost ceph-osd[32707]: osd.4 pg_epoch: 55 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=12.946455002s) [2,0,4] r=2 lpr=55 pi=[47,55)/1 crt=42'39 mlcod 0'0 active pruub 1206.744140625s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [2,0,4], acting [4,2,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:07:09 localhost ceph-osd[32707]: osd.4 pg_epoch: 55 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=12.946278572s) [2,0,4] r=2 lpr=55 pi=[47,55)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1206.744140625s@ mbc={}] state: transitioning to Stray
Dec 2 03:07:09 localhost ceph-osd[32707]: osd.4 pg_epoch: 55 pg[7.5( v 42'39 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=12.940092087s) [2,0,4] r=2 lpr=55 pi=[47,55)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1206.738037109s@ mbc={}] state: transitioning to Stray
Dec 2 03:07:09 localhost python3[57226]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:07:10 localhost python3[57288]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True
checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:07:10 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 2 03:07:10 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 2 03:07:10 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Dec 2 03:07:10 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Dec 2 03:07:10 localhost python3[57306]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:07:11 localhost python3[57368]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:07:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 03:07:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4209 writes, 19K keys, 4209 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4209 writes, 411 syncs, 10.24 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 810 writes, 2804 keys, 810 commit groups, 1.0 writes per commit group, ingest: 1.41 MB, 0.00 MB/s#012Interval WAL: 810 writes, 209 syncs, 3.88 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0
percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 
memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, 
interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Dec 2 03:07:11 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.5 scrub starts Dec 2 03:07:11 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.5 scrub ok Dec 2 03:07:11 localhost python3[57386]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:07:11 localhost ceph-osd[31770]: osd.1 pg_epoch: 57 pg[7.6( v 42'39 (0'0,42'39] local-lis/les=49/50 n=2 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.983466148s) [0,4,5] r=-1 lpr=57 pi=[49,57)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1213.342407227s@ mbc={}] start_peering_interval up [3,5,1] -> [0,4,5], acting [3,5,1] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:11 localhost ceph-osd[31770]: osd.1 pg_epoch: 57 pg[7.6( v 42'39 (0'0,42'39] local-lis/les=49/50 n=2 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.983395576s) 
[0,4,5] r=-1 lpr=57 pi=[49,57)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1213.342407227s@ mbc={}] state: transitioning to Stray Dec 2 03:07:11 localhost ceph-osd[31770]: osd.1 pg_epoch: 57 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=49/50 n=1 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.984255791s) [0,4,5] r=-1 lpr=57 pi=[49,57)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1213.344238281s@ mbc={}] start_peering_interval up [3,5,1] -> [0,4,5], acting [3,5,1] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:11 localhost ceph-osd[31770]: osd.1 pg_epoch: 57 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=49/50 n=1 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.984129906s) [0,4,5] r=-1 lpr=57 pi=[49,57)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1213.344238281s@ mbc={}] state: transitioning to Stray Dec 2 03:07:12 localhost python3[57448]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:07:12 localhost python3[57466]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:07:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:07:12 localhost ceph-osd[32707]: osd.4 pg_epoch: 57 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57) [0,4,5] r=1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:07:12 localhost ceph-osd[32707]: osd.4 pg_epoch: 57 pg[7.6( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57) [0,4,5] r=1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:07:12 localhost systemd[1]: tmp-crun.KRKQ2q.mount: Deactivated successfully. Dec 2 03:07:12 localhost podman[57497]: 2025-12-02 08:07:12.749297329 +0000 UTC m=+0.110723734 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, tcib_managed=true) Dec 2 03:07:12 localhost python3[57496]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:07:12 localhost systemd[1]: Reloading. Dec 2 03:07:12 localhost systemd-sysv-generator[57556]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:07:12 localhost systemd-rc-local-generator[57549]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 03:07:12 localhost podman[57497]: 2025-12-02 08:07:12.953644573 +0000 UTC m=+0.315070968 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1) Dec 2 03:07:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:07:13 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:07:13 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.a scrub starts Dec 2 03:07:13 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.a scrub ok Dec 2 03:07:13 localhost python3[57612]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:07:13 localhost python3[57630]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:07:14 localhost python3[57692]: ansible-ansible.legacy.stat Invoked with 
path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:07:14 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.8 scrub starts Dec 2 03:07:14 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.8 scrub ok Dec 2 03:07:14 localhost python3[57710]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:07:15 localhost python3[57740]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:07:15 localhost systemd[1]: Reloading. Dec 2 03:07:15 localhost systemd-rc-local-generator[57765]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:07:15 localhost systemd-sysv-generator[57768]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:07:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 03:07:15 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.1a scrub starts Dec 2 03:07:15 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.1a scrub ok Dec 2 03:07:15 localhost systemd[1]: Starting Create netns directory... Dec 2 03:07:15 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 03:07:15 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 03:07:15 localhost systemd[1]: Finished Create netns directory. Dec 2 03:07:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:07:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.2 total, 600.0 interval#012Cumulative writes: 5093 writes, 22K keys, 5093 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5093 writes, 477 syncs, 10.68 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1842 writes, 6555 keys, 1842 commit groups, 1.0 writes per commit group, ingest: 2.41 MB, 0.00 MB/s#012Interval WAL: 1842 writes, 336 syncs, 5.48 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction 
Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) 
Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 m Dec 2 03:07:16 localhost python3[57799]: ansible-container_puppet_config Invoked with 
update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 2 03:07:17 localhost python3[57857]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 2 03:07:17 localhost podman[57932]: 2025-12-02 08:07:17.869194855 +0000 UTC m=+0.063726045 container create 1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step2, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute_init_log, build-date=2025-11-19T00:36:58Z, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:07:17 localhost podman[57933]: 2025-12-02 08:07:17.901564979 +0000 UTC m=+0.091025273 container create 768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, vcs-type=git, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtqemud_init_logs, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
release=1761123044, config_id=tripleo_step2, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 03:07:17 localhost systemd[1]: Started libpod-conmon-1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac.scope. Dec 2 03:07:17 localhost systemd[1]: Started libcrun container. Dec 2 03:07:17 localhost systemd[1]: Started libpod-conmon-768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9.scope. Dec 2 03:07:17 localhost podman[57932]: 2025-12-02 08:07:17.832225779 +0000 UTC m=+0.026756999 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:07:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54218a875306d5e9e02be164dfc59f569c03cec4fa589e4979e72cb65e05c169/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:07:17 localhost systemd[1]: Started libcrun container. 
Dec 2 03:07:17 localhost podman[57933]: 2025-12-02 08:07:17.85062954 +0000 UTC m=+0.040089824 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:07:17 localhost podman[57932]: 2025-12-02 08:07:17.949452563 +0000 UTC m=+0.143983753 container init 1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_compute_init_log, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, url=https://www.redhat.com, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, version=17.1.12, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:07:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bcbe901bb45e8070f2f315648c2b8d8a4260ab9ddef9da25ac029ee28a25fc8/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Dec 2 03:07:17 localhost podman[57932]: 2025-12-02 08:07:17.956898834 +0000 UTC m=+0.151430024 container start 1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute_init_log, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.openshift.expose-services=, 
distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:07:17 localhost python3[57857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova Dec 2 03:07:17 localhost systemd[1]: libpod-1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac.scope: Deactivated successfully. 
Dec 2 03:07:18 localhost podman[57968]: 2025-12-02 08:07:18.010700452 +0000 UTC m=+0.039625159 container died 1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=nova_compute_init_log, release=1761123044) Dec 2 03:07:18 localhost podman[57933]: 2025-12-02 08:07:18.059470533 +0000 UTC m=+0.248930807 container init 
768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.buildah.version=1.41.4, container_name=nova_virtqemud_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, build-date=2025-11-19T00:35:22Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:07:18 localhost systemd[1]: libpod-768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9.scope: Deactivated successfully. 
Dec 2 03:07:18 localhost podman[57969]: 2025-12-02 08:07:18.147065408 +0000 UTC m=+0.171400914 container cleanup 1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute_init_log, maintainer=OpenStack TripleO Team, config_id=tripleo_step2, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:07:18 localhost systemd[1]: libpod-conmon-1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac.scope: Deactivated successfully. 
Dec 2 03:07:18 localhost podman[57933]: 2025-12-02 08:07:18.167531763 +0000 UTC m=+0.356992087 container start 768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtqemud_init_logs, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step2, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public) Dec 2 03:07:18 localhost python3[57857]: 
ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm Dec 2 03:07:18 localhost podman[58005]: 2025-12-02 08:07:18.170285478 +0000 UTC m=+0.086811722 container died 768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, config_id=tripleo_step2, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtqemud_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, maintainer=OpenStack TripleO 
Team, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64) Dec 2 03:07:18 localhost podman[58005]: 2025-12-02 08:07:18.199330048 +0000 UTC m=+0.115856252 container cleanup 768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, 
io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step2) Dec 2 03:07:18 localhost systemd[1]: libpod-conmon-768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9.scope: Deactivated successfully. 
Dec 2 03:07:18 localhost podman[58117]: 2025-12-02 08:07:18.55458889 +0000 UTC m=+0.079001970 container create 7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, container_name=create_virtlogd_wrapper, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, release=1761123044, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=) Dec 2 03:07:18 localhost podman[58128]: 2025-12-02 08:07:18.576635083 +0000 UTC m=+0.076516273 container create 52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=create_haproxy_wrapper, vendor=Red Hat, Inc., config_id=tripleo_step2, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044) Dec 2 03:07:18 localhost systemd[1]: Started libpod-conmon-7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55.scope. Dec 2 03:07:18 localhost systemd[1]: Started libpod-conmon-52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7.scope. Dec 2 03:07:18 localhost systemd[1]: Started libcrun container. 
Dec 2 03:07:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/297413164ba634cc6890ee6589cadf094aa7e1bc60468b5e2b171a73d85ccd70/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff) Dec 2 03:07:18 localhost systemd[1]: Started libcrun container. Dec 2 03:07:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e958529aaf3ea18edfde977fa21cc545be3514f2ed0637a72be1cc0091549c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 03:07:18 localhost podman[58117]: 2025-12-02 08:07:18.514828558 +0000 UTC m=+0.039241728 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:07:18 localhost podman[58117]: 2025-12-02 08:07:18.613979111 +0000 UTC m=+0.138392191 container init 7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step2, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4) Dec 2 03:07:18 localhost podman[58128]: 2025-12-02 08:07:18.622103653 +0000 UTC m=+0.121984833 container init 52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 
'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, container_name=create_haproxy_wrapper, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 
17.1_20251118.1, architecture=x86_64) Dec 2 03:07:18 localhost podman[58128]: 2025-12-02 08:07:18.535138427 +0000 UTC m=+0.035019627 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 2 03:07:18 localhost podman[58128]: 2025-12-02 08:07:18.65425236 +0000 UTC m=+0.154133540 container start 52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step2, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=create_haproxy_wrapper, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 
'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}) Dec 2 03:07:18 localhost podman[58128]: 2025-12-02 08:07:18.654863518 +0000 UTC m=+0.154744788 container attach 52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step2, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, distribution-scope=public, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=create_haproxy_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-type=git) Dec 2 03:07:18 localhost podman[58117]: 2025-12-02 08:07:18.674478626 +0000 UTC m=+0.198891716 container start 7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, container_name=create_virtlogd_wrapper, config_id=tripleo_step2, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Dec 2 03:07:18 localhost podman[58117]: 2025-12-02 08:07:18.674807706 +0000 UTC m=+0.199220856 container attach 7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, config_id=tripleo_step2, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.expose-services=, 
build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=create_virtlogd_wrapper, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:07:18 localhost systemd[1]: var-lib-containers-storage-overlay-9bcbe901bb45e8070f2f315648c2b8d8a4260ab9ddef9da25ac029ee28a25fc8-merged.mount: Deactivated successfully. Dec 2 03:07:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9-userdata-shm.mount: Deactivated successfully. Dec 2 03:07:18 localhost systemd[1]: var-lib-containers-storage-overlay-54218a875306d5e9e02be164dfc59f569c03cec4fa589e4979e72cb65e05c169-merged.mount: Deactivated successfully. Dec 2 03:07:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac-userdata-shm.mount: Deactivated successfully. 
Dec 2 03:07:19 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.e scrub starts Dec 2 03:07:19 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.e scrub ok Dec 2 03:07:19 localhost ceph-osd[32707]: osd.4 pg_epoch: 59 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=12.809239388s) [1,5,3] r=-1 lpr=59 pi=[51,59)/1 luod=0'0 crt=42'39 mlcod 0'0 active pruub 1216.785766602s@ mbc={}] start_peering_interval up [3,4,2] -> [1,5,3], acting [3,4,2] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:19 localhost ceph-osd[32707]: osd.4 pg_epoch: 59 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=12.809184074s) [1,5,3] r=-1 lpr=59 pi=[51,59)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1216.785766602s@ mbc={}] state: transitioning to Stray Dec 2 03:07:19 localhost ceph-osd[32707]: osd.4 pg_epoch: 59 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=12.812079430s) [1,5,3] r=-1 lpr=59 pi=[51,59)/1 luod=0'0 crt=42'39 mlcod 0'0 active pruub 1216.789062500s@ mbc={}] start_peering_interval up [3,4,2] -> [1,5,3], acting [3,4,2] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:19 localhost ceph-osd[32707]: osd.4 pg_epoch: 59 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=12.812050819s) [1,5,3] r=-1 lpr=59 pi=[51,59)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1216.789062500s@ mbc={}] state: transitioning to Stray Dec 2 03:07:19 localhost ceph-osd[31770]: osd.1 pg_epoch: 59 pg[7.7( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59) [1,5,3] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:07:19 
localhost ceph-osd[31770]: osd.1 pg_epoch: 59 pg[7.f( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59) [1,5,3] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:07:20 localhost ovs-vsctl[58240]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Dec 2 03:07:20 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.15 scrub starts Dec 2 03:07:20 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.15 scrub ok Dec 2 03:07:20 localhost systemd[1]: libpod-7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55.scope: Deactivated successfully. Dec 2 03:07:20 localhost systemd[1]: libpod-7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55.scope: Consumed 2.186s CPU time. Dec 2 03:07:20 localhost podman[58117]: 2025-12-02 08:07:20.814500589 +0000 UTC m=+2.338913759 container died 7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step2, container_name=create_virtlogd_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-nova-libvirt) Dec 2 03:07:20 localhost ceph-osd[31770]: osd.1 pg_epoch: 60 pg[7.7( v 42'39 lc 42'21 (0'0,42'39] local-lis/les=59/60 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59) [1,5,3] r=0 lpr=59 pi=[51,59)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Dec 2 03:07:20 localhost ceph-osd[31770]: osd.1 pg_epoch: 60 pg[7.f( v 42'39 lc 42'1 (0'0,42'39] local-lis/les=59/60 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59) [1,5,3] r=0 lpr=59 
pi=[51,59)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(1+2)=3}}] state: react AllReplicasActivated Activating complete Dec 2 03:07:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55-userdata-shm.mount: Deactivated successfully. Dec 2 03:07:20 localhost systemd[1]: var-lib-containers-storage-overlay-297413164ba634cc6890ee6589cadf094aa7e1bc60468b5e2b171a73d85ccd70-merged.mount: Deactivated successfully. Dec 2 03:07:20 localhost podman[58373]: 2025-12-02 08:07:20.899789742 +0000 UTC m=+0.076079519 container cleanup 7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, build-date=2025-11-19T00:35:22Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step2, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=create_virtlogd_wrapper, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include 
::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible) Dec 2 03:07:20 localhost systemd[1]: libpod-conmon-7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55.scope: Deactivated successfully. 
Dec 2 03:07:20 localhost python3[57857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper Dec 2 03:07:21 localhost systemd[1]: libpod-52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7.scope: Deactivated successfully. Dec 2 03:07:21 localhost systemd[1]: libpod-52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7.scope: Consumed 2.170s CPU time. Dec 2 03:07:21 localhost podman[58128]: 2025-12-02 08:07:21.47432979 +0000 UTC m=+2.974210970 container died 52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step2, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, container_name=create_haproxy_wrapper, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include 
::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:07:21 localhost podman[58414]: 2025-12-02 08:07:21.535137865 +0000 UTC m=+0.051488106 container cleanup 52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=create_haproxy_wrapper, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:07:21 localhost systemd[1]: libpod-conmon-52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7.scope: Deactivated successfully. Dec 2 03:07:21 localhost python3[57857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers Dec 2 03:07:21 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.7 scrub starts Dec 2 03:07:21 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.7 scrub ok Dec 2 03:07:21 localhost ceph-osd[31770]: osd.1 pg_epoch: 61 pg[7.8( v 42'39 (0'0,42'39] local-lis/les=45/46 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=12.421516418s) [3,4,5] r=-1 lpr=61 pi=[45,61)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1222.979248047s@ mbc={}] start_peering_interval up [1,5,3] -> [3,4,5], acting [1,5,3] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:21 localhost ceph-osd[31770]: osd.1 pg_epoch: 61 pg[7.8( v 42'39 (0'0,42'39] local-lis/les=45/46 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=12.420804977s) [3,4,5] r=-1 lpr=61 pi=[45,61)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1222.979248047s@ mbc={}] state: transitioning to Stray Dec 2 03:07:21 localhost systemd[1]: 
var-lib-containers-storage-overlay-a1e958529aaf3ea18edfde977fa21cc545be3514f2ed0637a72be1cc0091549c-merged.mount: Deactivated successfully. Dec 2 03:07:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7-userdata-shm.mount: Deactivated successfully. Dec 2 03:07:22 localhost python3[58469]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:07:22 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.7 scrub starts Dec 2 03:07:22 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.7 scrub ok Dec 2 03:07:22 localhost sshd[58484]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:07:22 localhost ceph-osd[32707]: osd.4 pg_epoch: 61 pg[7.8( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=61) [3,4,5] r=1 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:07:23 localhost python3[58592]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005541914 step=2 update_config_hash_only=False Dec 2 03:07:24 localhost python3[58608]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None 
selevel=None setype=None attributes=None Dec 2 03:07:24 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.1f scrub starts Dec 2 03:07:24 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.1f scrub ok Dec 2 03:07:24 localhost python3[58624]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 2 03:07:27 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.f scrub starts Dec 2 03:07:27 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.f scrub ok Dec 2 03:07:29 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.f scrub starts Dec 2 03:07:29 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.f scrub ok Dec 2 03:07:29 localhost ceph-osd[32707]: osd.4 pg_epoch: 63 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=47/48 n=0 ec=45/39 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=8.938389778s) [0,2,4] r=2 lpr=63 pi=[47,63)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1222.738525391s@ mbc={}] start_peering_interval up [4,2,3] -> [0,2,4], acting [4,2,3] -> [0,2,4], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:29 localhost ceph-osd[32707]: osd.4 pg_epoch: 63 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=47/48 n=0 ec=45/39 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=8.938322067s) [0,2,4] r=2 lpr=63 pi=[47,63)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1222.738525391s@ mbc={}] state: transitioning to Stray Dec 2 03:07:31 localhost ceph-osd[31770]: osd.1 pg_epoch: 65 pg[7.a( v 42'39 (0'0,42'39] local-lis/les=49/50 n=1 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=65 pruub=8.810658455s) [2,0,4] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1229.338745117s@ mbc={}] start_peering_interval up [3,5,1] -> [2,0,4], acting [3,5,1] -> [2,0,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 
-1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:31 localhost ceph-osd[31770]: osd.1 pg_epoch: 65 pg[7.a( v 42'39 (0'0,42'39] local-lis/les=49/50 n=1 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=65 pruub=8.810417175s) [2,0,4] r=-1 lpr=65 pi=[49,65)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1229.338745117s@ mbc={}] state: transitioning to Stray Dec 2 03:07:32 localhost ceph-osd[32707]: osd.4 pg_epoch: 65 pg[7.a( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=65) [2,0,4] r=2 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:07:33 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.3 scrub starts Dec 2 03:07:33 localhost ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.3 scrub ok Dec 2 03:07:34 localhost ceph-osd[32707]: osd.4 pg_epoch: 67 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=14.406540871s) [3,1,2] r=-1 lpr=67 pi=[51,67)/1 luod=0'0 crt=42'39 mlcod 0'0 active pruub 1232.786865234s@ mbc={}] start_peering_interval up [3,4,2] -> [3,1,2], acting [3,4,2] -> [3,1,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:34 localhost ceph-osd[32707]: osd.4 pg_epoch: 67 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=14.406447411s) [3,1,2] r=-1 lpr=67 pi=[51,67)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1232.786865234s@ mbc={}] state: transitioning to Stray Dec 2 03:07:35 localhost ceph-osd[31770]: osd.1 pg_epoch: 68 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=53/54 n=1 ec=45/39 lis/c=53/53 les/c/f=54/54/0 sis=68 pruub=15.075953484s) [1,3,2] r=0 lpr=68 pi=[53,68)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1239.598876953s@ mbc={}] start_peering_interval up [0,1,2] -> [1,3,2], acting [0,1,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> 0, 
features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:35 localhost ceph-osd[31770]: osd.1 pg_epoch: 68 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=53/54 n=1 ec=45/39 lis/c=53/53 les/c/f=54/54/0 sis=68 pruub=15.075953484s) [1,3,2] r=0 lpr=68 pi=[53,68)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown pruub 1239.598876953s@ mbc={}] state: transitioning to Primary Dec 2 03:07:35 localhost ceph-osd[31770]: osd.1 pg_epoch: 67 pg[7.b( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=67) [3,1,2] r=1 lpr=67 pi=[51,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:07:36 localhost sshd[58625]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:07:37 localhost ceph-osd[31770]: osd.1 pg_epoch: 69 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=68/69 n=1 ec=45/39 lis/c=53/53 les/c/f=54/54/0 sis=68) [1,3,2] r=0 lpr=68 pi=[53,68)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Dec 2 03:07:38 localhost ceph-osd[32707]: osd.4 pg_epoch: 70 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=55/56 n=1 ec=45/39 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=12.525090218s) [1,3,5] r=-1 lpr=70 pi=[55,70)/1 luod=0'0 crt=42'39 mlcod 0'0 active pruub 1234.853393555s@ mbc={}] start_peering_interval up [2,0,4] -> [1,3,5], acting [2,0,4] -> [1,3,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:38 localhost ceph-osd[32707]: osd.4 pg_epoch: 70 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=55/56 n=1 ec=45/39 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=12.525009155s) [1,3,5] r=-1 lpr=70 pi=[55,70)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1234.853393555s@ mbc={}] state: transitioning to Stray Dec 2 03:07:38 localhost ceph-osd[31770]: osd.1 pg_epoch: 70 pg[7.d( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=55/55 les/c/f=56/56/0 sis=70) [1,3,5] r=0 lpr=70 pi=[55,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: 
transitioning to Primary Dec 2 03:07:39 localhost ceph-osd[31770]: osd.1 pg_epoch: 71 pg[7.d( v 42'39 lc 42'13 (0'0,42'39] local-lis/les=70/71 n=1 ec=45/39 lis/c=55/55 les/c/f=56/56/0 sis=70) [1,3,5] r=0 lpr=70 pi=[55,70)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+3)=2}}] state: react AllReplicasActivated Activating complete Dec 2 03:07:42 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.d scrub starts Dec 2 03:07:42 localhost ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.d scrub ok Dec 2 03:07:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:07:44 localhost podman[58628]: 2025-12-02 08:07:44.263835397 +0000 UTC m=+0.266400379 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, release=1761123044, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:07:44 localhost podman[58628]: 2025-12-02 08:07:44.455986403 +0000 UTC m=+0.458551315 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack 
Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z) Dec 2 03:07:44 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. 
Dec 2 03:07:45 localhost ceph-osd[32707]: osd.4 pg_epoch: 72 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=57/58 n=1 ec=45/39 lis/c=57/57 les/c/f=58/58/0 sis=72 pruub=15.034337997s) [3,5,1] r=-1 lpr=72 pi=[57,72)/1 luod=0'0 crt=42'39 mlcod 0'0 active pruub 1244.863769531s@ mbc={}] start_peering_interval up [0,4,5] -> [3,5,1], acting [0,4,5] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:45 localhost ceph-osd[32707]: osd.4 pg_epoch: 72 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=57/58 n=1 ec=45/39 lis/c=57/57 les/c/f=58/58/0 sis=72 pruub=15.034082413s) [3,5,1] r=-1 lpr=72 pi=[57,72)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1244.863769531s@ mbc={}] state: transitioning to Stray Dec 2 03:07:46 localhost ceph-osd[31770]: osd.1 pg_epoch: 72 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=57/57 les/c/f=58/58/0 sis=72) [3,5,1] r=2 lpr=72 pi=[57,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:07:47 localhost ceph-osd[31770]: osd.1 pg_epoch: 74 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=59/60 n=3 ec=45/39 lis/c=59/59 les/c/f=60/60/0 sis=74 pruub=13.103336334s) [0,5,1] r=2 lpr=74 pi=[59,74)/1 crt=42'39 mlcod 0'0 active pruub 1249.551757812s@ mbc={255={}}] start_peering_interval up [1,5,3] -> [0,5,1], acting [1,5,3] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:47 localhost ceph-osd[31770]: osd.1 pg_epoch: 74 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=59/60 n=3 ec=45/39 lis/c=59/59 les/c/f=60/60/0 sis=74 pruub=13.102553368s) [0,5,1] r=2 lpr=74 pi=[59,74)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1249.551757812s@ mbc={}] state: transitioning to Stray Dec 2 03:08:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:08:15 localhost podman[58733]: 2025-12-02 08:08:15.07651386 +0000 UTC m=+0.080011530 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, tcib_managed=true, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr) Dec 2 03:08:15 localhost podman[58733]: 2025-12-02 08:08:15.278446179 +0000 UTC m=+0.281943849 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step1) Dec 2 03:08:15 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:08:38 localhost sshd[58763]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:08:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:08:46 localhost podman[58765]: 2025-12-02 08:08:46.064653741 +0000 UTC m=+0.075554038 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true) Dec 2 03:08:46 localhost podman[58765]: 2025-12-02 08:08:46.28716319 +0000 UTC m=+0.298063557 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team) Dec 2 03:08:46 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:08:50 localhost sshd[58796]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:08:58 localhost sshd[58798]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:09:03 localhost systemd[1]: tmp-crun.jwjeZo.mount: Deactivated successfully. 
Dec 2 03:09:03 localhost podman[58901]: 2025-12-02 08:09:03.420898484 +0000 UTC m=+0.085854573 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , ceph=True, RELEASE=main, name=rhceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 03:09:03 localhost podman[58901]: 2025-12-02 08:09:03.55180466 +0000 UTC m=+0.216760699 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , version=7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, distribution-scope=public, name=rhceph, GIT_BRANCH=main, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, CEPH_POINT_RELEASE=) Dec 2 03:09:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:09:17 localhost podman[59044]: 2025-12-02 08:09:17.128214043 +0000 UTC m=+0.129160022 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, distribution-scope=public, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64) Dec 2 03:09:17 localhost podman[59044]: 2025-12-02 08:09:17.356995861 +0000 UTC m=+0.357941910 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step1, architecture=x86_64, url=https://www.redhat.com, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:09:17 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:09:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:09:48 localhost podman[59073]: 2025-12-02 08:09:48.112618056 +0000 UTC m=+0.113688244 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Dec 2 03:09:48 localhost podman[59073]: 2025-12-02 08:09:48.340761283 +0000 UTC m=+0.341831441 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, 
version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Dec 2 03:09:48 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. 
Dec 2 03:09:54 localhost sshd[59102]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:10:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:10:19 localhost systemd[1]: tmp-crun.7bsfE0.mount: Deactivated successfully. Dec 2 03:10:19 localhost podman[59181]: 2025-12-02 08:10:19.077881411 +0000 UTC m=+0.082578969 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z) Dec 2 03:10:19 localhost podman[59181]: 2025-12-02 08:10:19.285835072 +0000 UTC m=+0.290532640 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, release=1761123044) Dec 2 03:10:19 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:10:20 localhost sshd[59210]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:10:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:10:50 localhost systemd[1]: tmp-crun.Rx9uhK.mount: Deactivated successfully. 
Dec 2 03:10:50 localhost podman[59212]: 2025-12-02 08:10:50.079725835 +0000 UTC m=+0.086357865 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=metrics_qdr, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible) Dec 2 03:10:50 localhost podman[59212]: 2025-12-02 08:10:50.264023099 +0000 UTC m=+0.270655089 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T22:49:46Z, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:10:50 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:10:52 localhost sshd[59242]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:11:16 localhost sshd[59321]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:11:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:11:21 localhost podman[59323]: 2025-12-02 08:11:21.067152186 +0000 UTC m=+0.074167661 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com) Dec 2 03:11:21 localhost podman[59323]: 2025-12-02 08:11:21.257157621 +0000 UTC m=+0.264173126 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, vcs-type=git, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, version=17.1.12) Dec 2 03:11:21 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:11:42 localhost sshd[59351]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:11:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:11:52 localhost podman[59353]: 2025-12-02 08:11:52.074133162 +0000 UTC m=+0.081527952 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:11:52 localhost podman[59353]: 2025-12-02 08:11:52.286527184 +0000 UTC m=+0.293921984 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:11:52 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. 
Dec 2 03:12:10 localhost python3[59505]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:11 localhost python3[59550]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663130.5146744-98264-70616013773059/source _original_basename=tmp0rxr48ke follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:12 localhost python3[59580]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:14 localhost ansible-async_wrapper.py[59752]: Invoked with 925576933445 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663133.6802697-98474-155704683432526/AnsiballZ_command.py _ Dec 2 03:12:14 localhost ansible-async_wrapper.py[59755]: Starting module and watcher Dec 2 03:12:14 localhost ansible-async_wrapper.py[59755]: Start watching 59756 (3600) Dec 2 03:12:14 localhost ansible-async_wrapper.py[59756]: Start module (59756) Dec 2 03:12:14 localhost ansible-async_wrapper.py[59752]: Return async_wrapper task started. Dec 2 03:12:14 localhost python3[59774]: ansible-ansible.legacy.async_status Invoked with jid=925576933445.59752 mode=status _async_dir=/tmp/.ansible_async Dec 2 03:12:17 localhost puppet-user[59776]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Dec 2 03:12:17 localhost puppet-user[59776]: (file: /etc/puppet/hiera.yaml) Dec 2 03:12:17 localhost puppet-user[59776]: Warning: Undefined variable '::deploy_config_name'; Dec 2 03:12:17 localhost puppet-user[59776]: (file & line not available) Dec 2 03:12:17 localhost puppet-user[59776]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 2 03:12:17 localhost puppet-user[59776]: (file & line not available) Dec 2 03:12:17 localhost puppet-user[59776]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Dec 2 03:12:18 localhost puppet-user[59776]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Dec 2 03:12:18 localhost puppet-user[59776]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.11 seconds Dec 2 03:12:18 localhost puppet-user[59776]: Notice: Applied catalog in 0.04 seconds Dec 2 03:12:18 localhost puppet-user[59776]: Application: Dec 2 03:12:18 localhost puppet-user[59776]: Initial environment: production Dec 2 03:12:18 localhost puppet-user[59776]: Converged environment: production Dec 2 03:12:18 localhost puppet-user[59776]: Run mode: user Dec 2 03:12:18 localhost puppet-user[59776]: Changes: Dec 2 03:12:18 localhost puppet-user[59776]: Events: Dec 2 03:12:18 localhost puppet-user[59776]: Resources: Dec 2 03:12:18 localhost puppet-user[59776]: Total: 10 Dec 2 03:12:18 localhost puppet-user[59776]: Time: Dec 2 03:12:18 localhost puppet-user[59776]: Schedule: 0.00 Dec 2 03:12:18 localhost puppet-user[59776]: File: 0.00 Dec 2 03:12:18 localhost puppet-user[59776]: Exec: 0.01 Dec 2 03:12:18 localhost puppet-user[59776]: Augeas: 0.01 Dec 2 03:12:18 localhost puppet-user[59776]: Transaction evaluation: 0.03 Dec 2 03:12:18 
localhost puppet-user[59776]: Catalog application: 0.04 Dec 2 03:12:18 localhost puppet-user[59776]: Config retrieval: 0.14 Dec 2 03:12:18 localhost puppet-user[59776]: Last run: 1764663138 Dec 2 03:12:18 localhost puppet-user[59776]: Filebucket: 0.00 Dec 2 03:12:18 localhost puppet-user[59776]: Total: 0.04 Dec 2 03:12:18 localhost puppet-user[59776]: Version: Dec 2 03:12:18 localhost puppet-user[59776]: Config: 1764663137 Dec 2 03:12:18 localhost puppet-user[59776]: Puppet: 7.10.0 Dec 2 03:12:18 localhost ansible-async_wrapper.py[59756]: Module complete (59756) Dec 2 03:12:19 localhost ansible-async_wrapper.py[59755]: Done in kid B. Dec 2 03:12:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:12:23 localhost podman[59888]: 2025-12-02 08:12:23.08633501 +0000 UTC m=+0.087371315 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, distribution-scope=public, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:12:23 localhost podman[59888]: 2025-12-02 08:12:23.293897328 +0000 UTC m=+0.294933653 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, container_name=metrics_qdr, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, name=rhosp17/openstack-qdrouterd, vcs-type=git, 
io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:12:23 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:12:24 localhost python3[59933]: ansible-ansible.legacy.async_status Invoked with jid=925576933445.59752 mode=status _async_dir=/tmp/.ansible_async Dec 2 03:12:25 localhost python3[59949]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 2 03:12:26 localhost python3[59965]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:27 localhost python3[60015]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:27 localhost python3[60033]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp4xfhx6hf recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 2 03:12:27 localhost python3[60063]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:28 localhost python3[60166]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Dec 2 03:12:29 localhost python3[60185]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:31 localhost python3[60218]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:31 localhost python3[60268]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:32 localhost python3[60286]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False 
state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:32 localhost python3[60348]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:32 localhost python3[60366]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:33 localhost python3[60428]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:33 localhost python3[60446]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:34 localhost python3[60508]: ansible-ansible.legacy.stat Invoked with 
path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:34 localhost sshd[60527]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:12:34 localhost python3[60526]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:35 localhost python3[60558]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:12:35 localhost systemd[1]: Reloading. Dec 2 03:12:35 localhost systemd-sysv-generator[60588]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:12:35 localhost systemd-rc-local-generator[60582]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:12:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 03:12:35 localhost python3[60644]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:36 localhost python3[60662]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:36 localhost python3[60725]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:37 localhost python3[60743]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:37 localhost python3[60773]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:12:37 localhost systemd[1]: Reloading. Dec 2 03:12:37 localhost systemd-rc-local-generator[60798]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 03:12:37 localhost systemd-sysv-generator[60804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:12:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:12:37 localhost systemd[1]: Starting Create netns directory... Dec 2 03:12:37 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 03:12:37 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 03:12:37 localhost systemd[1]: Finished Create netns directory. Dec 2 03:12:38 localhost python3[60831]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 2 03:12:40 localhost python3[60889]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 2 03:12:40 localhost podman[61059]: 2025-12-02 08:12:40.905858877 +0000 UTC m=+0.057951329 container create 4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, 
url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, container_name=ceilometer_init_log, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:12:40 localhost podman[61058]: 2025-12-02 08:12:40.932968286 +0000 UTC m=+0.088052867 container create 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.12, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, 
tcib_managed=true) Dec 2 03:12:40 localhost systemd[1]: Started libpod-conmon-4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458.scope. Dec 2 03:12:40 localhost podman[61076]: 2025-12-02 08:12:40.950813448 +0000 UTC m=+0.092287033 container create 514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, release=1761123044, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, container_name=nova_statedir_owner, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:12:40 localhost systemd[1]: Started libcrun container. Dec 2 03:12:40 localhost systemd[1]: Started libpod-conmon-2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.scope. Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f966c4c02ca72bf571aaf0656247c88b73268323ddd77e58521b9ea3db73d1/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost systemd[1]: Started libcrun container. Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13e199db7335dd51d53d563216fcc1a3ed75eba14190a583a84b8f73b6c9d42/merged/scripts supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost podman[61081]: 2025-12-02 08:12:40.970885646 +0000 UTC m=+0.106286030 container create fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, 
container_name=nova_virtlogd_wrapper, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container) Dec 2 03:12:40 localhost podman[61059]: 2025-12-02 08:12:40.873288066 +0000 UTC m=+0.025380518 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13e199db7335dd51d53d563216fcc1a3ed75eba14190a583a84b8f73b6c9d42/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost podman[61058]: 2025-12-02 08:12:40.879603115 +0000 UTC m=+0.034687716 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Dec 2 03:12:40 localhost podman[61059]: 2025-12-02 08:12:40.982395679 +0000 UTC m=+0.134488131 container init 4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, container_name=ceilometer_init_log, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step3, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 2 03:12:40 localhost podman[61076]: 2025-12-02 08:12:40.888311805 +0000 UTC m=+0.029785430 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:12:40 localhost podman[61059]: 2025-12-02 08:12:40.990001436 +0000 UTC m=+0.142093878 container start 4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:12:40 localhost systemd[1]: Started libpod-conmon-514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326.scope. 
Dec 2 03:12:40 localhost python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer Dec 2 03:12:40 localhost systemd[1]: libpod-4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458.scope: Deactivated successfully. Dec 2 03:12:40 localhost systemd[1]: Started libpod-conmon-fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b.scope. Dec 2 03:12:41 localhost systemd[1]: Started libcrun container. 
Dec 2 03:12:41 localhost podman[61081]: 2025-12-02 08:12:40.903025303 +0000 UTC m=+0.038425677 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:41 localhost podman[61077]: 2025-12-02 08:12:40.903904129 +0000 UTC m=+0.040812177 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f9fab5806f96664fad9b3e3421bfde63bb6a7412470abd2bfea5e9a57acc82/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f9fab5806f96664fad9b3e3421bfde63bb6a7412470abd2bfea5e9a57acc82/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f9fab5806f96664fad9b3e3421bfde63bb6a7412470abd2bfea5e9a57acc82/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost podman[61076]: 2025-12-02 08:12:41.01296837 +0000 UTC m=+0.154442005 container init 514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, 
url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, version=17.1.12, architecture=x86_64) Dec 2 03:12:41 localhost systemd[1]: Started libcrun container. 
Dec 2 03:12:41 localhost podman[61076]: 2025-12-02 08:12:41.019724022 +0000 UTC m=+0.161197647 container start 514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, name=rhosp17/openstack-nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, version=17.1.12, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=nova_statedir_owner, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:12:41 localhost podman[61076]: 2025-12-02 08:12:41.020019191 +0000 UTC m=+0.161492816 container attach 514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, distribution-scope=public, vendor=Red Hat, Inc., config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:36:58Z, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, 
container_name=nova_statedir_owner, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost podman[61077]: 
2025-12-02 08:12:41.028665639 +0000 UTC m=+0.165573687 container create 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, name=rhosp17/openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, config_id=tripleo_step3, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git) Dec 2 03:12:41 localhost podman[61140]: 2025-12-02 08:12:41.064580999 +0000 UTC m=+0.057882066 container died 4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, vcs-type=git, distribution-scope=public, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
config_id=tripleo_step3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_init_log, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:12:41 localhost podman[61076]: 2025-12-02 08:12:41.067079254 +0000 UTC m=+0.208552879 container died 514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-19T00:36:58Z, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, container_name=nova_statedir_owner, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Dec 2 03:12:41 localhost systemd[1]: Started libpod-conmon-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope. Dec 2 03:12:41 localhost systemd[1]: libpod-514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326.scope: Deactivated successfully. Dec 2 03:12:41 localhost podman[61081]: 2025-12-02 08:12:41.079175424 +0000 UTC m=+0.214575778 container init fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 
'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=nova_virtlogd_wrapper, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, 
summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:12:41 localhost systemd[1]: Started libcrun container. Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:12:41 localhost podman[61058]: 2025-12-02 08:12:41.101285884 +0000 UTC m=+0.256370545 container init 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 
collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team) Dec 2 03:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:12:41 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Dec 2 03:12:41 localhost systemd[1]: Created slice User Slice of UID 0. 
Dec 2 03:12:41 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Dec 2 03:12:41 localhost podman[61081]: 2025-12-02 08:12:41.140767701 +0000 UTC m=+0.276168055 container start fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, container_name=nova_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, tcib_managed=true) Dec 2 03:12:41 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. 
Dec 2 03:12:41 localhost podman[61176]: 2025-12-02 08:12:41.14478186 +0000 UTC m=+0.067578716 container cleanup 514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, container_name=nova_statedir_owner, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute) Dec 2 03:12:41 localhost python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume 
/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:41 localhost systemd[1]: libpod-conmon-514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326.scope: Deactivated successfully. Dec 2 03:12:41 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Dec 2 03:12:41 localhost systemd[1]: Starting User Manager for UID 0... Dec 2 03:12:41 localhost python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 
/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py Dec 2 03:12:41 localhost podman[61058]: 2025-12-02 08:12:41.240565976 +0000 UTC m=+0.395650557 container start 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 2 03:12:41 localhost python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d31718fcd17fdeee6489534105191c7a --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Dec 2 03:12:41 localhost podman[61201]: 2025-12-02 08:12:41.280542707 +0000 UTC m=+0.141323394 container 
health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, container_name=collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc.) Dec 2 03:12:41 localhost podman[61077]: 2025-12-02 08:12:41.291046561 +0000 UTC m=+0.427954599 container init 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, build-date=2025-11-18T22:49:49Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-type=git) Dec 2 03:12:41 localhost podman[61077]: 2025-12-02 08:12:41.297656898 +0000 UTC m=+0.434564936 container start 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, version=17.1.12, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-rsyslog, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com) Dec 2 03:12:41 localhost systemd[61217]: Queued start job for default 
target Main User Target. Dec 2 03:12:41 localhost systemd[61217]: Created slice User Application Slice. Dec 2 03:12:41 localhost systemd[61217]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 2 03:12:41 localhost systemd[61217]: Started Daily Cleanup of User's Temporary Directories. Dec 2 03:12:41 localhost systemd[61217]: Reached target Paths. Dec 2 03:12:41 localhost systemd[61217]: Reached target Timers. Dec 2 03:12:41 localhost python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=96606bb2d91ec59ed336cbd6010f1851 --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', 
'/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Dec 2 03:12:41 localhost systemd[61217]: Starting D-Bus User Message Bus Socket... Dec 2 03:12:41 localhost systemd[61217]: Starting Create User's Volatile Files and Directories... Dec 2 03:12:41 localhost systemd[61217]: Listening on D-Bus User Message Bus Socket. Dec 2 03:12:41 localhost systemd[61217]: Finished Create User's Volatile Files and Directories. Dec 2 03:12:41 localhost systemd[61217]: Reached target Sockets. Dec 2 03:12:41 localhost systemd[61217]: Reached target Basic System. Dec 2 03:12:41 localhost systemd[1]: Started User Manager for UID 0. Dec 2 03:12:41 localhost systemd[61217]: Reached target Main User Target. Dec 2 03:12:41 localhost systemd[61217]: Startup finished in 117ms. 
Dec 2 03:12:41 localhost podman[61201]: 2025-12-02 08:12:41.312653484 +0000 UTC m=+0.173434121 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, version=17.1.12, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, release=1761123044) Dec 2 03:12:41 localhost systemd[1]: Started Session c1 of User root. Dec 2 03:12:41 localhost systemd[1]: Started Session c2 of User root. Dec 2 03:12:41 localhost podman[61201]: unhealthy Dec 2 03:12:41 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:12:41 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Failed with result 'exit-code'. Dec 2 03:12:41 localhost systemd[1]: libpod-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope: Deactivated successfully. Dec 2 03:12:41 localhost systemd[1]: session-c2.scope: Deactivated successfully. Dec 2 03:12:41 localhost systemd[1]: session-c1.scope: Deactivated successfully. 
Dec 2 03:12:41 localhost podman[61315]: 2025-12-02 08:12:41.430285141 +0000 UTC m=+0.041813278 container died 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-type=git, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, 
vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, container_name=rsyslog, name=rhosp17/openstack-rsyslog, config_id=tripleo_step3, tcib_managed=true, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4) Dec 2 03:12:41 localhost podman[61144]: 2025-12-02 08:12:41.443791084 +0000 UTC m=+0.437101562 container cleanup 4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step3, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_init_log, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 
'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, distribution-scope=public) Dec 2 03:12:41 localhost podman[61315]: 2025-12-02 08:12:41.447088732 +0000 UTC m=+0.058616839 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=rsyslog, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044) Dec 2 03:12:41 localhost systemd[1]: libpod-conmon-4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458.scope: Deactivated successfully. Dec 2 03:12:41 localhost systemd[1]: libpod-conmon-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope: Deactivated successfully. 
Dec 2 03:12:41 localhost podman[61415]: 2025-12-02 08:12:41.624418859 +0000 UTC m=+0.053458035 container create c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Dec 2 03:12:41 localhost systemd[1]: Started libpod-conmon-c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a.scope. Dec 2 03:12:41 localhost systemd[1]: Started libcrun container. 
Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14a2782138c084b8d1f9a2d1c3241237dbc098d9496c81144c959b54b35a260/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14a2782138c084b8d1f9a2d1c3241237dbc098d9496c81144c959b54b35a260/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14a2782138c084b8d1f9a2d1c3241237dbc098d9496c81144c959b54b35a260/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14a2782138c084b8d1f9a2d1c3241237dbc098d9496c81144c959b54b35a260/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost podman[61415]: 2025-12-02 08:12:41.676320576 +0000 UTC m=+0.105359762 container init c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com) Dec 2 03:12:41 localhost podman[61415]: 2025-12-02 08:12:41.681273854 +0000 UTC m=+0.110313050 container start c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 03:12:41 localhost podman[61415]: 2025-12-02 08:12:41.603056992 +0000 UTC m=+0.032096178 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:41 localhost 
podman[61471]: 2025-12-02 08:12:41.798315713 +0000 UTC m=+0.078521362 container create c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, container_name=nova_virtsecretd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-11-19T00:35:22Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64) Dec 2 03:12:41 localhost systemd[1]: Started libpod-conmon-c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10.scope. Dec 2 03:12:41 localhost systemd[1]: Started libcrun container. 
Dec 2 03:12:41 localhost podman[61471]: 2025-12-02 08:12:41.754654321 +0000 UTC m=+0.034860000 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost podman[61471]: 2025-12-02 08:12:41.864200267 +0000 UTC m=+0.144405916 container init 
c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtsecretd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public) Dec 2 03:12:41 localhost podman[61471]: 2025-12-02 08:12:41.874730701 +0000 UTC m=+0.154936350 container start c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, config_id=tripleo_step3, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_virtsecretd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': 
['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:12:41 localhost python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume 
/var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:41 localhost systemd[1]: var-lib-containers-storage-overlay-a9f966c4c02ca72bf571aaf0656247c88b73268323ddd77e58521b9ea3db73d1-merged.mount: Deactivated successfully. Dec 2 03:12:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458-userdata-shm.mount: Deactivated successfully. Dec 2 03:12:41 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Dec 2 03:12:41 localhost systemd[1]: Started Session c3 of User root. Dec 2 03:12:42 localhost systemd[1]: session-c3.scope: Deactivated successfully. 
Dec 2 03:12:42 localhost podman[61606]: 2025-12-02 08:12:42.321320324 +0000 UTC m=+0.080162050 container create f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:12:42 localhost systemd[1]: Started libpod-conmon-f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.scope. Dec 2 03:12:42 localhost podman[61619]: 2025-12-02 08:12:42.372784209 +0000 UTC m=+0.101291711 container create 380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, config_id=tripleo_step3, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtnodedevd, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 03:12:42 localhost systemd[1]: Started libcrun container. Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f5c4d65539870ee2bafb1f7e39854f191dd3f1ae459b319446f5932294db9e/merged/etc/target supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f5c4d65539870ee2bafb1f7e39854f191dd3f1ae459b319446f5932294db9e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost podman[61606]: 2025-12-02 08:12:42.289563008 +0000 UTC m=+0.048404754 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Dec 2 03:12:42 localhost systemd[1]: Started libpod-conmon-380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3.scope. Dec 2 03:12:42 localhost podman[61619]: 2025-12-02 08:12:42.319929483 +0000 UTC m=+0.048437015 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:12:42 localhost podman[61606]: 2025-12-02 08:12:42.425612004 +0000 UTC m=+0.184453730 container init f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
iscsid, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vendor=Red Hat, Inc.) Dec 2 03:12:42 localhost systemd[1]: Started libcrun container. Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a/merged/var/cache/libvirt supports 
timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:12:42 localhost podman[61619]: 2025-12-02 08:12:42.447170486 +0000 UTC m=+0.175677948 container init 380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, release=1761123044, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtnodedevd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:12:42 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. 
Dec 2 03:12:42 localhost podman[61619]: 2025-12-02 08:12:42.457974488 +0000 UTC m=+0.186481940 container start 380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtnodedevd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3) Dec 2 03:12:42 localhost python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:42 localhost systemd[1]: Started Session c4 of User root. 
Dec 2 03:12:42 localhost podman[61606]: 2025-12-02 08:12:42.502910128 +0000 UTC m=+0.261751844 container start f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, release=1761123044, name=rhosp17/openstack-iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:12:42 localhost python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d89676d7ec0a7c13ef9894fdb26c6e3a --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Dec 2 03:12:42 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Dec 2 03:12:42 localhost systemd[1]: Started Session c5 of User root. 
Dec 2 03:12:42 localhost podman[61649]: 2025-12-02 08:12:42.552889598 +0000 UTC m=+0.096416115 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, container_name=iscsid, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12) Dec 2 03:12:42 localhost systemd[1]: session-c4.scope: Deactivated successfully. Dec 2 03:12:42 localhost kernel: Loading iSCSI transport class v2.0-870. Dec 2 03:12:42 localhost systemd[1]: session-c5.scope: Deactivated successfully. Dec 2 03:12:42 localhost podman[61649]: 2025-12-02 08:12:42.62273794 +0000 UTC m=+0.166264427 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, container_name=iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:12:42 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 03:12:43 localhost podman[61787]: 2025-12-02 08:12:43.058551852 +0000 UTC m=+0.092985923 container create f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, io.openshift.expose-services=, container_name=nova_virtstoraged, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z) Dec 2 03:12:43 localhost podman[61787]: 2025-12-02 08:12:43.009780838 +0000 UTC m=+0.044214939 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:43 localhost systemd[1]: Started libpod-conmon-f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379.scope. Dec 2 03:12:43 localhost systemd[1]: Started libcrun container. 
Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost podman[61787]: 2025-12-02 08:12:43.164576333 +0000 UTC m=+0.199010394 container init f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-libvirt, distribution-scope=public, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, container_name=nova_virtstoraged, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container) Dec 2 03:12:43 localhost podman[61787]: 2025-12-02 08:12:43.178716584 +0000 UTC m=+0.213150645 container start f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:12:43 localhost python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared 
--volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:43 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Dec 2 03:12:43 localhost systemd[1]: Started Session c6 of User root. Dec 2 03:12:43 localhost systemd[1]: session-c6.scope: Deactivated successfully. Dec 2 03:12:43 localhost podman[61890]: 2025-12-02 08:12:43.595931352 +0000 UTC m=+0.070031559 container create cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, architecture=x86_64, container_name=nova_virtqemud, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, 
distribution-scope=public, version=17.1.12) Dec 2 03:12:43 localhost systemd[1]: Started libpod-conmon-cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7.scope. Dec 2 03:12:43 localhost systemd[1]: Started libcrun container. Dec 2 03:12:43 localhost podman[61890]: 2025-12-02 08:12:43.553170618 +0000 UTC m=+0.027270815 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost podman[61890]: 2025-12-02 08:12:43.660962642 +0000 UTC m=+0.135062849 container init cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Dec 2 03:12:43 localhost podman[61890]: 2025-12-02 08:12:43.671729032 +0000 UTC m=+0.145829249 container start cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, release=1761123044, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12) Dec 2 03:12:43 localhost python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 
'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:43 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Dec 2 03:12:43 localhost systemd[1]: Started Session c7 of User root. Dec 2 03:12:43 localhost systemd[1]: session-c7.scope: Deactivated successfully. 
Dec 2 03:12:44 localhost podman[61991]: 2025-12-02 08:12:44.109046859 +0000 UTC m=+0.084012346 container create f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, url=https://www.redhat.com, container_name=nova_virtproxyd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step3, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vcs-type=git) Dec 2 03:12:44 localhost systemd[1]: Started libpod-conmon-f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358.scope. Dec 2 03:12:44 localhost podman[61991]: 2025-12-02 08:12:44.061223613 +0000 UTC m=+0.036189120 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:44 localhost systemd[1]: Started libcrun container. 
Dec 2 03:12:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:44 localhost podman[61991]: 2025-12-02 08:12:44.18690402 +0000 UTC m=+0.161869507 container init f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, 
com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtproxyd, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, architecture=x86_64, batch=17.1_20251118.1) Dec 2 03:12:44 localhost podman[61991]: 2025-12-02 08:12:44.197082554 +0000 UTC m=+0.172048041 container start f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, container_name=nova_virtproxyd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, 
release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 03:12:44 localhost python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume 
/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:44 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Dec 2 03:12:44 localhost systemd[1]: Started Session c8 of User root. Dec 2 03:12:44 localhost systemd[1]: session-c8.scope: Deactivated successfully. Dec 2 03:12:44 localhost python3[62072]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:45 localhost python3[62088]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:45 localhost python3[62104]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:45 localhost python3[62120]: ansible-file 
Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:45 localhost python3[62136]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:46 localhost python3[62152]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:46 localhost python3[62168]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:46 localhost python3[62184]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:46 localhost python3[62200]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:47 localhost python3[62217]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:47 localhost python3[62233]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:47 localhost python3[62249]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:47 localhost python3[62265]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:48 localhost python3[62281]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:48 localhost python3[62297]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer 
follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:48 localhost python3[62313]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:48 localhost python3[62329]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:49 localhost python3[62345]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:49 localhost python3[62406]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:50 localhost python3[62435]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:50 localhost python3[62464]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source 
dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:51 localhost python3[62493]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:51 localhost python3[62522]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:52 localhost python3[62551]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:52 localhost python3[62580]: ansible-copy Invoked with 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:53 localhost python3[62609]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:12:53 localhost podman[62638]: 2025-12-02 08:12:53.947623032 +0000 UTC m=+0.092531031 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, vcs-type=git, build-date=2025-11-18T22:49:46Z) Dec 2 03:12:53 localhost python3[62639]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:54 localhost podman[62638]: 2025-12-02 08:12:54.199339095 +0000 UTC m=+0.344247114 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:12:54 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. 
Dec 2 03:12:54 localhost python3[62682]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 03:12:54 localhost systemd[1]: Reloading. Dec 2 03:12:54 localhost systemd-rc-local-generator[62703]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:12:54 localhost systemd-sysv-generator[62707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:12:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:12:54 localhost systemd[1]: Stopping User Manager for UID 0... Dec 2 03:12:54 localhost systemd[61217]: Activating special unit Exit the Session... Dec 2 03:12:54 localhost systemd[61217]: Stopped target Main User Target. Dec 2 03:12:54 localhost systemd[61217]: Stopped target Basic System. Dec 2 03:12:54 localhost systemd[61217]: Stopped target Paths. Dec 2 03:12:54 localhost systemd[61217]: Stopped target Sockets. Dec 2 03:12:54 localhost systemd[61217]: Stopped target Timers. Dec 2 03:12:54 localhost systemd[61217]: Stopped Daily Cleanup of User's Temporary Directories. Dec 2 03:12:54 localhost systemd[61217]: Closed D-Bus User Message Bus Socket. Dec 2 03:12:54 localhost systemd[61217]: Stopped Create User's Volatile Files and Directories. Dec 2 03:12:54 localhost systemd[61217]: Removed slice User Application Slice. Dec 2 03:12:54 localhost systemd[61217]: Reached target Shutdown. Dec 2 03:12:54 localhost systemd[61217]: Finished Exit the Session. Dec 2 03:12:54 localhost systemd[61217]: Reached target Exit the Session. Dec 2 03:12:54 localhost systemd[1]: user@0.service: Deactivated successfully. 
Dec 2 03:12:54 localhost systemd[1]: Stopped User Manager for UID 0. Dec 2 03:12:54 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Dec 2 03:12:54 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Dec 2 03:12:54 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Dec 2 03:12:54 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Dec 2 03:12:54 localhost systemd[1]: Removed slice User Slice of UID 0. Dec 2 03:12:55 localhost python3[62736]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:12:55 localhost systemd[1]: Reloading. Dec 2 03:12:55 localhost systemd-sysv-generator[62768]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:12:55 localhost systemd-rc-local-generator[62765]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:12:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:12:55 localhost systemd[1]: Starting collectd container... Dec 2 03:12:55 localhost systemd[1]: Started collectd container. Dec 2 03:12:56 localhost python3[62801]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:12:56 localhost systemd[1]: Reloading. Dec 2 03:12:56 localhost systemd-rc-local-generator[62825]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:12:56 localhost systemd-sysv-generator[62831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:12:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:12:56 localhost systemd[1]: Starting iscsid container... Dec 2 03:12:56 localhost systemd[1]: Started iscsid container. Dec 2 03:12:57 localhost python3[62867]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:12:57 localhost systemd[1]: Reloading. Dec 2 03:12:57 localhost systemd-rc-local-generator[62896]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:12:57 localhost systemd-sysv-generator[62900]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:12:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:12:57 localhost systemd[1]: Starting nova_virtlogd_wrapper container... Dec 2 03:12:57 localhost systemd[1]: Started nova_virtlogd_wrapper container. Dec 2 03:12:57 localhost sshd[62918]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:12:58 localhost python3[62935]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:12:58 localhost systemd[1]: Reloading. 
Dec 2 03:12:58 localhost systemd-sysv-generator[62966]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:12:58 localhost systemd-rc-local-generator[62960]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:12:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:12:58 localhost systemd[1]: Starting nova_virtnodedevd container... Dec 2 03:12:58 localhost tripleo-start-podman-container[62975]: Creating additional drop-in dependency for "nova_virtnodedevd" (380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3) Dec 2 03:12:58 localhost systemd[1]: Reloading. Dec 2 03:12:58 localhost systemd-rc-local-generator[63031]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:12:58 localhost systemd-sysv-generator[63036]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:12:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:12:59 localhost systemd[1]: Started nova_virtnodedevd container. Dec 2 03:12:59 localhost python3[63058]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:12:59 localhost systemd[1]: Reloading. Dec 2 03:13:00 localhost systemd-rc-local-generator[63085]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 03:13:00 localhost systemd-sysv-generator[63088]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:13:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:13:00 localhost systemd[1]: Starting nova_virtproxyd container... Dec 2 03:13:00 localhost tripleo-start-podman-container[63098]: Creating additional drop-in dependency for "nova_virtproxyd" (f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358) Dec 2 03:13:00 localhost systemd[1]: Reloading. Dec 2 03:13:00 localhost systemd-rc-local-generator[63155]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:13:00 localhost systemd-sysv-generator[63158]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:13:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:13:00 localhost systemd[1]: Started nova_virtproxyd container. Dec 2 03:13:02 localhost python3[63182]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:13:02 localhost systemd[1]: Reloading. Dec 2 03:13:02 localhost systemd-rc-local-generator[63209]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:13:02 localhost systemd-sysv-generator[63213]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:13:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:13:03 localhost systemd[1]: Starting nova_virtqemud container... Dec 2 03:13:03 localhost tripleo-start-podman-container[63221]: Creating additional drop-in dependency for "nova_virtqemud" (cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7) Dec 2 03:13:03 localhost systemd[1]: Reloading. Dec 2 03:13:03 localhost systemd-rc-local-generator[63277]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:13:03 localhost systemd-sysv-generator[63282]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:13:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:13:03 localhost systemd[1]: Started nova_virtqemud container. Dec 2 03:13:04 localhost python3[63306]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:13:04 localhost systemd[1]: Reloading. Dec 2 03:13:04 localhost systemd-sysv-generator[63335]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 03:13:04 localhost systemd-rc-local-generator[63331]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:13:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:13:04 localhost systemd[1]: Starting nova_virtsecretd container... Dec 2 03:13:04 localhost tripleo-start-podman-container[63345]: Creating additional drop-in dependency for "nova_virtsecretd" (c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10) Dec 2 03:13:04 localhost systemd[1]: Reloading. Dec 2 03:13:04 localhost systemd-rc-local-generator[63405]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:13:04 localhost systemd-sysv-generator[63408]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:13:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:13:05 localhost systemd[1]: Started nova_virtsecretd container. Dec 2 03:13:05 localhost python3[63429]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:13:05 localhost systemd[1]: Reloading. Dec 2 03:13:05 localhost systemd-rc-local-generator[63457]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:13:05 localhost systemd-sysv-generator[63461]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 03:13:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:13:06 localhost systemd[1]: Starting nova_virtstoraged container... Dec 2 03:13:06 localhost tripleo-start-podman-container[63469]: Creating additional drop-in dependency for "nova_virtstoraged" (f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379) Dec 2 03:13:06 localhost systemd[1]: Reloading. Dec 2 03:13:06 localhost systemd-sysv-generator[63531]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:13:06 localhost systemd-rc-local-generator[63526]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:13:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:13:06 localhost sshd[63537]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:13:06 localhost systemd[1]: Started nova_virtstoraged container. Dec 2 03:13:07 localhost python3[63554]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:13:07 localhost systemd[1]: Reloading. Dec 2 03:13:07 localhost systemd-rc-local-generator[63576]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:13:07 localhost systemd-sysv-generator[63582]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 03:13:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:13:07 localhost systemd[1]: Starting rsyslog container...
Dec 2 03:13:07 localhost systemd[1]: Started libcrun container.
Dec 2 03:13:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 2 03:13:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 2 03:13:07 localhost podman[63594]: 2025-12-02 08:13:07.529067254 +0000 UTC m=+0.134559072 container init 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, io.buildah.version=1.41.4, name=rhosp17/openstack-rsyslog, container_name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container)
Dec 2 03:13:07 localhost podman[63594]: 2025-12-02 08:13:07.539834285 +0000 UTC m=+0.145326083 container start 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=rsyslog, tcib_managed=true,
io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, config_id=tripleo_step3, build-date=2025-11-18T22:49:49Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, url=https://www.redhat.com, 
release=1761123044)
Dec 2 03:13:07 localhost podman[63594]: rsyslog
Dec 2 03:13:07 localhost systemd[1]: Started rsyslog container.
Dec 2 03:13:07 localhost systemd[1]: libpod-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope: Deactivated successfully.
Dec 2 03:13:07 localhost podman[63629]: 2025-12-02 08:13:07.72781721 +0000 UTC m=+0.055675071 container died 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, release=1761123044, container_name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4)
Dec 2 03:13:07 localhost podman[63629]: 2025-12-02 08:13:07.756763882 +0000 UTC m=+0.084621713 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, name=rhosp17/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=rsyslog, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, build-date=2025-11-18T22:49:49Z)
Dec 2 03:13:07 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 03:13:07 localhost podman[63641]: 2025-12-02 08:13:07.843691934 +0000 UTC m=+0.055600779 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:49Z,
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, name=rhosp17/openstack-rsyslog, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 
17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 2 03:13:07 localhost podman[63641]: rsyslog
Dec 2 03:13:07 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 2 03:13:07 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1.
Dec 2 03:13:07 localhost systemd[1]: Stopped rsyslog container.
Dec 2 03:13:07 localhost systemd[1]: Starting rsyslog container...
Dec 2 03:13:08 localhost python3[63669]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:13:08 localhost systemd[1]: Started libcrun container.
Dec 2 03:13:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 2 03:13:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 2 03:13:08 localhost podman[63670]: 2025-12-02 08:13:08.115651111 +0000 UTC m=+0.113799744 container init 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-11-18T22:49:49Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=rsyslog, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., url=https://www.redhat.com)
Dec 2 03:13:08 localhost podman[63670]: 2025-12-02 08:13:08.125403931 +0000 UTC m=+0.123552564 container start 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-type=git, name=rhosp17/openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, container_name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:49Z, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true)
Dec 2 03:13:08 localhost podman[63670]: rsyslog
Dec 2 03:13:08 localhost systemd[1]: Started rsyslog container.
Dec 2 03:13:08 localhost systemd[1]: libpod-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope: Deactivated successfully.
Dec 2 03:13:08 localhost podman[63692]: 2025-12-02 08:13:08.278586988 +0000 UTC m=+0.055026571 container died 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, name=rhosp17/openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step3, build-date=2025-11-18T22:49:49Z, 
konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=rsyslog, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog)
Dec 2 03:13:08 localhost podman[63692]: 2025-12-02 08:13:08.298362378 +0000 UTC m=+0.074801941 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, name=rhosp17/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, release=1761123044, container_name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']})
Dec 2 03:13:08 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 03:13:08 localhost podman[63706]: 2025-12-02 08:13:08.379743154 +0000 UTC m=+0.048402834 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:49Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, release=1761123044, container_name=rsyslog, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 2 03:13:08 localhost podman[63706]: rsyslog
Dec 2 03:13:08 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 2 03:13:08 localhost systemd[1]: var-lib-containers-storage-overlay-0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf-merged.mount: Deactivated successfully.
Dec 2 03:13:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60-userdata-shm.mount: Deactivated successfully.
Dec 2 03:13:08 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2.
Dec 2 03:13:08 localhost systemd[1]: Stopped rsyslog container.
Dec 2 03:13:08 localhost systemd[1]: Starting rsyslog container...
Dec 2 03:13:08 localhost systemd[1]: Started libcrun container.
Dec 2 03:13:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 2 03:13:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 2 03:13:08 localhost podman[63767]: 2025-12-02 08:13:08.636310283 +0000 UTC m=+0.109842266 container init 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, release=1761123044, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-18T22:49:49Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, batch=17.1_20251118.1, container_name=rsyslog, config_id=tripleo_step3, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 03:13:08 localhost podman[63767]: 2025-12-02 08:13:08.642563449 +0000 UTC m=+0.116095452 container start 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60
(image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, 
io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, tcib_managed=true, build-date=2025-11-18T22:49:49Z) Dec 2 03:13:08 localhost podman[63767]: rsyslog Dec 2 03:13:08 localhost systemd[1]: Started rsyslog container. Dec 2 03:13:08 localhost systemd[1]: libpod-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope: Deactivated successfully. Dec 2 03:13:08 localhost podman[63792]: 2025-12-02 08:13:08.767872565 +0000 UTC m=+0.032759328 container died 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=rsyslog, name=rhosp17/openstack-rsyslog, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1) Dec 2 03:13:08 localhost podman[63792]: 2025-12-02 08:13:08.790329364 +0000 UTC m=+0.055216117 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:49Z, 
com.redhat.component=openstack-rsyslog-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step3, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-rsyslog, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, container_name=rsyslog, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}) Dec 2 03:13:08 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, 
status=1/FAILURE Dec 2 03:13:08 localhost podman[63821]: 2025-12-02 08:13:08.858877097 +0000 UTC m=+0.037675133 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, build-date=2025-11-18T22:49:49Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step3, vcs-type=git, name=rhosp17/openstack-rsyslog, container_name=rsyslog, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, 
io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog) Dec 2 03:13:08 localhost podman[63821]: rsyslog Dec 2 03:13:08 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Dec 2 03:13:08 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3. Dec 2 03:13:08 localhost systemd[1]: Stopped rsyslog container. Dec 2 03:13:08 localhost systemd[1]: Starting rsyslog container... Dec 2 03:13:09 localhost systemd[1]: Started libcrun container. 
Dec 2 03:13:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:13:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:13:09 localhost podman[63858]: 2025-12-02 08:13:09.098135691 +0000 UTC m=+0.105618410 container init 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, distribution-scope=public, 
konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, name=rhosp17/openstack-rsyslog, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, com.redhat.component=openstack-rsyslog-container, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64) Dec 2 03:13:09 localhost podman[63858]: 2025-12-02 08:13:09.103365087 +0000 UTC m=+0.110847806 container start 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, vcs-type=git, version=17.1.12, name=rhosp17/openstack-rsyslog, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=rsyslog, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, build-date=2025-11-18T22:49:49Z) Dec 2 03:13:09 localhost podman[63858]: rsyslog Dec 2 03:13:09 localhost systemd[1]: Started rsyslog container. Dec 2 03:13:09 localhost systemd[1]: libpod-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope: Deactivated successfully. 
Dec 2 03:13:09 localhost podman[63895]: 2025-12-02 08:13:09.192700539 +0000 UTC m=+0.031112528 container died 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, vendor=Red Hat, Inc., 
distribution-scope=public, name=rhosp17/openstack-rsyslog, architecture=x86_64, container_name=rsyslog, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4) Dec 2 03:13:09 localhost podman[63895]: 2025-12-02 08:13:09.212606713 +0000 UTC m=+0.051018642 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com) Dec 2 03:13:09 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:13:09 localhost podman[63923]: 2025-12-02 08:13:09.270162919 +0000 UTC m=+0.030215092 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, build-date=2025-11-18T22:49:49Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Dec 2 03:13:09 localhost podman[63923]: rsyslog Dec 2 03:13:09 localhost systemd[1]: tripleo_rsyslog.service: Failed with 
result 'exit-code'. Dec 2 03:13:09 localhost python3[63931]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005541914 step=3 update_config_hash_only=False Dec 2 03:13:09 localhost systemd[1]: var-lib-containers-storage-overlay-0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf-merged.mount: Deactivated successfully. Dec 2 03:13:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60-userdata-shm.mount: Deactivated successfully. Dec 2 03:13:09 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4. Dec 2 03:13:09 localhost systemd[1]: Stopped rsyslog container. Dec 2 03:13:09 localhost systemd[1]: Starting rsyslog container... Dec 2 03:13:09 localhost systemd[1]: Started libcrun container. 
Dec 2 03:13:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:13:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:13:09 localhost podman[63937]: 2025-12-02 08:13:09.583585452 +0000 UTC m=+0.095922310 container init 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', 
'/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:13:09 localhost podman[63937]: 2025-12-02 08:13:09.591969122 +0000 UTC m=+0.104306010 container start 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible) Dec 2 03:13:09 localhost podman[63937]: rsyslog Dec 2 03:13:09 localhost systemd[1]: Started rsyslog container. Dec 2 03:13:09 localhost systemd[1]: libpod-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope: Deactivated successfully. 
Dec 2 03:13:09 localhost podman[63957]: 2025-12-02 08:13:09.723856424 +0000 UTC m=+0.037450207 container died 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-rsyslog-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', 
'/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1761123044) Dec 2 03:13:09 localhost podman[63957]: 2025-12-02 08:13:09.745698225 +0000 UTC m=+0.059291968 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T22:49:49Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=) Dec 2 03:13:09 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:13:09 localhost podman[63971]: 2025-12-02 08:13:09.80861594 +0000 UTC m=+0.039063025 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, 
config_id=tripleo_step3, build-date=2025-11-18T22:49:49Z, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team) Dec 2 03:13:09 localhost podman[63971]: rsyslog Dec 2 03:13:09 localhost 
systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Dec 2 03:13:09 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5. Dec 2 03:13:09 localhost systemd[1]: Stopped rsyslog container. Dec 2 03:13:09 localhost systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly. Dec 2 03:13:09 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Dec 2 03:13:09 localhost systemd[1]: Failed to start rsyslog container. Dec 2 03:13:10 localhost python3[63998]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:13:10 localhost systemd[1]: var-lib-containers-storage-overlay-0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf-merged.mount: Deactivated successfully. Dec 2 03:13:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60-userdata-shm.mount: Deactivated successfully. Dec 2 03:13:10 localhost python3[64044]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 2 03:13:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:13:11 localhost podman[64091]: 2025-12-02 08:13:11.886766523 +0000 UTC m=+0.084547171 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, config_id=tripleo_step3, name=rhosp17/openstack-collectd, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, url=https://www.redhat.com) Dec 2 03:13:11 localhost podman[64091]: 2025-12-02 08:13:11.901744191 +0000 UTC m=+0.099524769 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step3, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 2 03:13:11 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:13:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:13:13 localhost podman[64111]: 2025-12-02 08:13:13.221966468 +0000 UTC m=+0.229457051 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:13:13 localhost podman[64111]: 2025-12-02 08:13:13.273028067 +0000 UTC m=+0.280518610 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, config_id=tripleo_step3, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, tcib_managed=true, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid) Dec 2 03:13:13 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:13:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:13:25 localhost systemd[1]: tmp-crun.dSxOgp.mount: Deactivated successfully. 
Dec 2 03:13:25 localhost podman[64130]: 2025-12-02 08:13:25.086940122 +0000 UTC m=+0.094560201 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.openshift.expose-services=, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, distribution-scope=public, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64) Dec 2 03:13:25 localhost podman[64130]: 2025-12-02 08:13:25.29921931 +0000 UTC m=+0.306839299 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, vcs-type=git, config_id=tripleo_step1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible) Dec 2 03:13:25 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:13:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:13:42 localhost podman[64159]: 2025-12-02 08:13:42.081240401 +0000 UTC m=+0.086691603 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, release=1761123044, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:13:42 localhost podman[64159]: 2025-12-02 08:13:42.090683698 +0000 UTC m=+0.096134920 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vendor=Red Hat, Inc.) Dec 2 03:13:42 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:13:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:13:44 localhost podman[64179]: 2025-12-02 08:13:44.05438291 +0000 UTC m=+0.062384826 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, tcib_managed=true) Dec 2 03:13:44 localhost podman[64179]: 2025-12-02 08:13:44.068183065 +0000 UTC m=+0.076185011 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20251118.1, description=Red Hat 
OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 2 03:13:44 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:13:50 localhost sshd[64198]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:13:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:13:56 localhost podman[64200]: 2025-12-02 08:13:56.083634049 +0000 UTC m=+0.085091012 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:13:56 localhost podman[64200]: 2025-12-02 08:13:56.27983367 +0000 UTC m=+0.281290613 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 2 03:13:56 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:14:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:14:13 localhost systemd[1]: tmp-crun.oMwu9f.mount: Deactivated successfully. 
Dec 2 03:14:13 localhost podman[64290]: 2025-12-02 08:14:13.08342028 +0000 UTC m=+0.086752335 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:14:13 localhost podman[64290]: 2025-12-02 08:14:13.095855532 +0000 UTC m=+0.099187567 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:14:13 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:14:15 localhost podman[64323]: 2025-12-02 08:14:15.066898075 +0000 UTC m=+0.075850691 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public) Dec 2 03:14:15 localhost podman[64323]: 2025-12-02 08:14:15.10388285 +0000 UTC m=+0.112835476 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, config_id=tripleo_step3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, name=rhosp17/openstack-iscsid) Dec 2 03:14:15 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:14:25 localhost sshd[64342]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:14:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:14:27 localhost podman[64344]: 2025-12-02 08:14:27.127911117 +0000 UTC m=+0.086017092 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container) Dec 2 03:14:27 localhost podman[64344]: 2025-12-02 08:14:27.324416388 +0000 UTC m=+0.282522353 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=metrics_qdr, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public) Dec 2 03:14:27 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:14:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:14:44 localhost podman[64374]: 2025-12-02 08:14:44.219691005 +0000 UTC m=+0.224393771 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, distribution-scope=public, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:14:44 localhost podman[64374]: 2025-12-02 08:14:44.230205207 +0000 UTC m=+0.234907983 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, distribution-scope=public, managed_by=tripleo_ansible) Dec 2 03:14:44 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:14:46 localhost systemd[1]: tmp-crun.ZjCkrA.mount: Deactivated successfully. Dec 2 03:14:46 localhost podman[64394]: 2025-12-02 08:14:46.086993994 +0000 UTC m=+0.089321102 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public) Dec 2 03:14:46 localhost podman[64394]: 2025-12-02 08:14:46.125920646 +0000 UTC m=+0.128247734 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, container_name=iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:14:46 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:14:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:14:58 localhost podman[64413]: 2025-12-02 08:14:58.094495357 +0000 UTC m=+0.092357037 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:14:58 localhost podman[64413]: 2025-12-02 08:14:58.321842855 +0000 UTC m=+0.319704535 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:14:58 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:14:59 localhost sshd[64443]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:15:02 localhost sshd[64447]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:15:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:15:15 localhost podman[64511]: 2025-12-02 08:15:15.089385126 +0000 UTC m=+0.092331916 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public, version=17.1.12, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=) Dec 2 03:15:15 localhost podman[64511]: 2025-12-02 08:15:15.126908694 +0000 UTC m=+0.129855454 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Dec 2 03:15:15 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:15:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:15:17 localhost podman[64531]: 2025-12-02 08:15:17.071088594 +0000 UTC m=+0.079365411 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 2 03:15:17 localhost podman[64531]: 2025-12-02 08:15:17.084922824 +0000 UTC m=+0.093199621 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, distribution-scope=public, version=17.1.12, tcib_managed=true, vcs-type=git, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team) Dec 2 03:15:17 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:15:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:15:29 localhost podman[64566]: 2025-12-02 08:15:29.079040389 +0000 UTC m=+0.076637406 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, 
summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, distribution-scope=public) Dec 2 03:15:29 localhost podman[64566]: 2025-12-02 08:15:29.26085281 +0000 UTC m=+0.258449827 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_id=tripleo_step1, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, distribution-scope=public) Dec 2 03:15:29 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:15:44 localhost sshd[64595]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:15:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:15:45 localhost podman[64597]: 2025-12-02 08:15:45.75986054 +0000 UTC m=+0.081222319 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, 
architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:15:45 localhost podman[64597]: 2025-12-02 08:15:45.77299269 +0000 UTC m=+0.094354499 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step3, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:15:45 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:15:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:15:48 localhost podman[64618]: 2025-12-02 08:15:48.048110483 +0000 UTC m=+0.060627528 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, container_name=iscsid, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, release=1761123044) Dec 2 03:15:48 localhost podman[64618]: 2025-12-02 08:15:48.060713295 +0000 UTC m=+0.073230270 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc.) Dec 2 03:15:48 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:15:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:16:00 localhost podman[64636]: 2025-12-02 08:16:00.079355545 +0000 UTC m=+0.084969236 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, managed_by=tripleo_ansible) Dec 2 03:16:00 localhost podman[64636]: 2025-12-02 08:16:00.244022782 +0000 UTC m=+0.249636473 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, tcib_managed=true) Dec 2 03:16:00 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:16:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:16:16 localhost podman[64665]: 2025-12-02 08:16:16.068445671 +0000 UTC m=+0.076764151 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12) Dec 2 03:16:16 localhost podman[64665]: 2025-12-02 08:16:16.101776798 +0000 UTC m=+0.110095258 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step3, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:16:16 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:16:16 localhost sshd[64686]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:16:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:16:18 localhost podman[64688]: 2025-12-02 08:16:18.18637626 +0000 UTC m=+0.088887458 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat 
OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, url=https://www.redhat.com, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible) Dec 2 03:16:18 localhost podman[64688]: 2025-12-02 08:16:18.194149912 +0000 UTC m=+0.096661110 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, release=1761123044, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true) Dec 2 03:16:18 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:16:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:16:31 localhost systemd[1]: tmp-crun.l3biKo.mount: Deactivated successfully. 
Dec 2 03:16:31 localhost podman[64831]: 2025-12-02 08:16:31.127442778 +0000 UTC m=+0.080949752 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:16:31 localhost podman[64831]: 2025-12-02 08:16:31.344436924 +0000 UTC m=+0.297943938 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, container_name=metrics_qdr, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:16:31 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:16:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:16:47 localhost systemd[1]: tmp-crun.VIGrIx.mount: Deactivated successfully. 
Dec 2 03:16:47 localhost podman[64862]: 2025-12-02 08:16:47.083110963 +0000 UTC m=+0.089500278 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:16:47 localhost podman[64862]: 2025-12-02 08:16:47.12092192 +0000 UTC m=+0.127311245 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Dec 2 03:16:47 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:16:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:16:49 localhost podman[64883]: 2025-12-02 08:16:49.072927893 +0000 UTC m=+0.077134703 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:16:49 localhost podman[64883]: 2025-12-02 08:16:49.111875816 +0000 UTC m=+0.116082656 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc.) Dec 2 03:16:49 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:17:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:17:02 localhost podman[64903]: 2025-12-02 08:17:02.050429274 +0000 UTC m=+0.059322748 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com) Dec 2 03:17:02 localhost podman[64903]: 2025-12-02 08:17:02.249358277 +0000 UTC m=+0.258251791 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public) Dec 2 03:17:02 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. 
Dec 2 03:17:04 localhost sshd[64934]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:17:06 localhost sshd[64936]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:17:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:17:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 4399 writes, 20K keys, 4399 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4399 writes, 504 syncs, 8.73 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 190 writes, 549 keys, 190 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s#012Interval WAL: 190 writes, 93 syncs, 2.04 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:17:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:17:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 5262 writes, 23K keys, 5262 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5262 writes, 560 syncs, 9.40 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 169 writes, 444 keys, 169 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s#012Interval WAL: 169 writes, 83 syncs, 2.04 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:17:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:17:18 localhost systemd[1]: tmp-crun.CYdHEr.mount: Deactivated successfully. 
Dec 2 03:17:18 localhost podman[64985]: 2025-12-02 08:17:18.117837506 +0000 UTC m=+0.114422723 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, 
maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:17:18 localhost podman[64985]: 2025-12-02 08:17:18.156867051 +0000 UTC m=+0.153452238 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, version=17.1.12, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true) Dec 2 03:17:18 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. 
Dec 2 03:17:18 localhost python3[64986]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:18 localhost python3[65053]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663437.8122132-107022-101980870140753/source _original_basename=tmpw_u8sle8 follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:17:19 localhost systemd[1]: tmp-crun.wxevjU.mount: Deactivated successfully. 
Dec 2 03:17:19 localhost podman[65115]: 2025-12-02 08:17:19.929910903 +0000 UTC m=+0.089773886 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, container_name=iscsid) Dec 2 03:17:19 localhost podman[65115]: 2025-12-02 08:17:19.963172979 +0000 UTC m=+0.123035932 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-type=git, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible) Dec 2 03:17:19 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 03:17:19 localhost python3[65116]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:20 localhost python3[65177]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663439.6693919-107120-208301867448792/source _original_basename=tmpsmgcjw7y follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:20 localhost python3[65239]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:21 localhost python3[65282]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663440.642849-107177-143401311914366/source _original_basename=tmpp5pyajf0 follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:22 localhost python3[65344]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:22 localhost python3[65387]: ansible-ansible.legacy.copy Invoked with 
dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663441.7007818-107231-108339490604747/source _original_basename=tmptk2a23br follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:23 localhost python3[65417]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None Dec 2 03:17:23 localhost systemd[1]: Reloading. Dec 2 03:17:23 localhost systemd-sysv-generator[65444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:23 localhost systemd-rc-local-generator[65439]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:23 localhost systemd[1]: Reloading. Dec 2 03:17:23 localhost systemd-rc-local-generator[65479]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:23 localhost systemd-sysv-generator[65485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 2 03:17:24 localhost python3[65507]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:17:24 localhost systemd[1]: Reloading. Dec 2 03:17:24 localhost systemd-rc-local-generator[65530]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:24 localhost systemd-sysv-generator[65534]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:24 localhost systemd[1]: Reloading. Dec 2 03:17:24 localhost systemd-rc-local-generator[65572]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:24 localhost systemd-sysv-generator[65575]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:24 localhost systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m. 
Dec 2 03:17:25 localhost python3[65598]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 03:17:25 localhost systemd[1]: Reloading. Dec 2 03:17:25 localhost systemd-rc-local-generator[65653]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:25 localhost systemd-sysv-generator[65657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:26 localhost python3[65761]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:26 localhost python3[65822]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663445.7601256-107414-24241160205576/source _original_basename=tmp9yd68cck follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:26 localhost python3[65883]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:17:26 localhost 
systemd[1]: Reloading. Dec 2 03:17:27 localhost systemd-sysv-generator[65924]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:27 localhost systemd-rc-local-generator[65920]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:27 localhost systemd[1]: Reached target tripleo_nova_libvirt.target. Dec 2 03:17:27 localhost podman[65979]: Dec 2 03:17:27 localhost podman[65979]: 2025-12-02 08:17:27.490835337 +0000 UTC m=+0.076829794 container create 3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_moser, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_CLEAN=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, 
release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph) Dec 2 03:17:27 localhost systemd[1]: Started libpod-conmon-3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd.scope. Dec 2 03:17:27 localhost systemd[1]: Started libcrun container. Dec 2 03:17:27 localhost podman[65979]: 2025-12-02 08:17:27.459500501 +0000 UTC m=+0.045494978 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 03:17:27 localhost podman[65979]: 2025-12-02 08:17:27.572727426 +0000 UTC m=+0.158721883 container init 3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_moser, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.expose-services=, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, vcs-type=git, distribution-scope=public, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) 
Dec 2 03:17:27 localhost podman[65979]: 2025-12-02 08:17:27.585048559 +0000 UTC m=+0.171043026 container start 3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_moser, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_BRANCH=main, release=1763362218, maintainer=Guillaume Abrioux , architecture=x86_64, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main)
Dec 2 03:17:27 localhost podman[65979]: 2025-12-02 08:17:27.585339189 +0000 UTC m=+0.171333636 container attach 3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_moser, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1763362218, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 2 03:17:27 localhost systemd[1]: libpod-3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd.scope: Deactivated successfully.
Dec 2 03:17:27 localhost crazy_moser[66008]: 167 167
Dec 2 03:17:27 localhost podman[65979]: 2025-12-02 08:17:27.589712635 +0000 UTC m=+0.175707072 container died 3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_moser, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , name=rhceph, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public)
Dec 2 03:17:27 localhost podman[66015]: 2025-12-02 08:17:27.678920462 +0000 UTC m=+0.072958302 container remove 3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_moser, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, version=7, vendor=Red Hat, Inc., name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 2 03:17:27 localhost systemd[1]: libpod-conmon-3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd.scope: Deactivated successfully.
Dec 2 03:17:27 localhost python3[66013]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 03:17:27 localhost podman[66037]:
Dec 2 03:17:27 localhost podman[66037]: 2025-12-02 08:17:27.870530337 +0000 UTC m=+0.076567954 container create 2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_margulis, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_CLEAN=True, name=rhceph, maintainer=Guillaume Abrioux , release=1763362218, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 2 03:17:27 localhost systemd[1]: Started libpod-conmon-2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671.scope.
Dec 2 03:17:27 localhost systemd[1]: Started libcrun container.
Dec 2 03:17:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3dc1cf386ade8161ec38d7c976fbdb41dcb4e9eb86c886693b13588cd208a6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 2 03:17:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3dc1cf386ade8161ec38d7c976fbdb41dcb4e9eb86c886693b13588cd208a6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 2 03:17:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3dc1cf386ade8161ec38d7c976fbdb41dcb4e9eb86c886693b13588cd208a6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 2 03:17:27 localhost podman[66037]: 2025-12-02 08:17:27.848058068 +0000 UTC m=+0.054095655 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 03:17:27 localhost podman[66037]: 2025-12-02 08:17:27.948387022 +0000 UTC m=+0.154424639 container init 2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_margulis, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, maintainer=Guillaume Abrioux , distribution-scope=public, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.buildah.version=1.41.4, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 03:17:27 localhost podman[66037]: 2025-12-02 08:17:27.957924879 +0000 UTC m=+0.163962496 container start 2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_margulis, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, version=7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True)
Dec 2 03:17:27 localhost podman[66037]: 2025-12-02 08:17:27.958138536 +0000 UTC m=+0.164176153 container attach 2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_margulis, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, name=rhceph, distribution-scope=public, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, build-date=2025-11-26T19:44:28Z, ceph=True, io.buildah.version=1.41.4)
Dec 2 03:17:28 localhost systemd[1]: var-lib-containers-storage-overlay-0e0a6f01ad14f243016ec121b93ed9c9fa375a910e90982e158037cdfdbcf2a3-merged.mount: Deactivated successfully.
Dec 2 03:17:29 localhost focused_margulis[66054]: [
Dec 2 03:17:29 localhost focused_margulis[66054]: {
Dec 2 03:17:29 localhost focused_margulis[66054]: "available": false,
Dec 2 03:17:29 localhost focused_margulis[66054]: "ceph_device": false,
Dec 2 03:17:29 localhost focused_margulis[66054]: "device_id": "QEMU_DVD-ROM_QM00001",
Dec 2 03:17:29 localhost focused_margulis[66054]: "lsm_data": {},
Dec 2 03:17:29 localhost focused_margulis[66054]: "lvs": [],
Dec 2 03:17:29 localhost focused_margulis[66054]: "path": "/dev/sr0",
Dec 2 03:17:29 localhost focused_margulis[66054]: "rejected_reasons": [
Dec 2 03:17:29 localhost focused_margulis[66054]: "Insufficient space (<5GB)",
Dec 2 03:17:29 localhost focused_margulis[66054]: "Has a FileSystem"
Dec 2 03:17:29 localhost focused_margulis[66054]: ],
Dec 2 03:17:29 localhost focused_margulis[66054]: "sys_api": {
Dec 2 03:17:29 localhost focused_margulis[66054]: "actuators": null,
Dec 2 03:17:29 localhost focused_margulis[66054]: "device_nodes": "sr0",
Dec 2 03:17:29 localhost focused_margulis[66054]: "human_readable_size": "482.00 KB",
Dec 2 03:17:29 localhost focused_margulis[66054]: "id_bus": "ata",
Dec 2 03:17:29 localhost focused_margulis[66054]: "model": "QEMU DVD-ROM",
Dec 2 03:17:29 localhost focused_margulis[66054]: "nr_requests": "2",
Dec 2 03:17:29 localhost focused_margulis[66054]: "partitions": {},
Dec 2 03:17:29 localhost focused_margulis[66054]: "path": "/dev/sr0",
Dec 2 03:17:29 localhost focused_margulis[66054]: "removable": "1",
Dec 2 03:17:29 localhost focused_margulis[66054]: "rev": "2.5+",
Dec 2 03:17:29 localhost focused_margulis[66054]: "ro": "0",
Dec 2 03:17:29 localhost focused_margulis[66054]: "rotational": "1",
Dec 2 03:17:29 localhost focused_margulis[66054]: "sas_address": "",
Dec 2 03:17:29 localhost focused_margulis[66054]: "sas_device_handle": "",
Dec 2 03:17:29 localhost focused_margulis[66054]: "scheduler_mode": "mq-deadline",
Dec 2 03:17:29 localhost focused_margulis[66054]: "sectors": 0,
Dec 2 03:17:29 localhost focused_margulis[66054]: "sectorsize": "2048",
Dec 2 03:17:29 localhost focused_margulis[66054]: "size": 493568.0,
Dec 2 03:17:29 localhost focused_margulis[66054]: "support_discard": "0",
Dec 2 03:17:29 localhost focused_margulis[66054]: "type": "disk",
Dec 2 03:17:29 localhost focused_margulis[66054]: "vendor": "QEMU"
Dec 2 03:17:29 localhost focused_margulis[66054]: }
Dec 2 03:17:29 localhost focused_margulis[66054]: }
Dec 2 03:17:29 localhost focused_margulis[66054]: ]
Dec 2 03:17:29 localhost systemd[1]: libpod-2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671.scope: Deactivated successfully.
Dec 2 03:17:29 localhost systemd[1]: libpod-2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671.scope: Consumed 1.058s CPU time.
Dec 2 03:17:29 localhost podman[66037]: 2025-12-02 08:17:29.034112895 +0000 UTC m=+1.240150572 container died 2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_margulis, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 2 03:17:29 localhost systemd[1]: var-lib-containers-storage-overlay-a3dc1cf386ade8161ec38d7c976fbdb41dcb4e9eb86c886693b13588cd208a6b-merged.mount: Deactivated successfully.
Dec 2 03:17:29 localhost podman[67735]: 2025-12-02 08:17:29.131441295 +0000 UTC m=+0.086240816 container remove 2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_margulis, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhceph, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, release=1763362218)
Dec 2 03:17:29 localhost systemd[1]: libpod-conmon-2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671.scope: Deactivated successfully.
Dec 2 03:17:29 localhost ansible-async_wrapper.py[67806]: Invoked with 736775549377 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663448.9636645-107510-208696551997559/AnsiballZ_command.py _
Dec 2 03:17:29 localhost ansible-async_wrapper.py[67809]: Starting module and watcher
Dec 2 03:17:29 localhost ansible-async_wrapper.py[67809]: Start watching 67810 (3600)
Dec 2 03:17:29 localhost ansible-async_wrapper.py[67810]: Start module (67810)
Dec 2 03:17:29 localhost ansible-async_wrapper.py[67806]: Return async_wrapper task started.
Dec 2 03:17:29 localhost python3[67830]: ansible-ansible.legacy.async_status Invoked with jid=736775549377.67806 mode=status _async_dir=/tmp/.ansible_async
Dec 2 03:17:32 localhost sshd[67955]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 03:17:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 2 03:17:33 localhost podman[67957]: 2025-12-02 08:17:33.06749258 +0000 UTC m=+0.069550246 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, config_id=tripleo_step1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 2 03:17:33 localhost puppet-user[67824]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 2 03:17:33 localhost puppet-user[67824]: (file: /etc/puppet/hiera.yaml)
Dec 2 03:17:33 localhost puppet-user[67824]: Warning: Undefined variable '::deploy_config_name';
Dec 2 03:17:33 localhost puppet-user[67824]: (file & line not available)
Dec 2 03:17:33 localhost podman[67957]: 2025-12-02 08:17:33.230905658 +0000 UTC m=+0.232963354 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1)
Dec 2 03:17:33 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 2 03:17:33 localhost puppet-user[67824]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 2 03:17:33 localhost puppet-user[67824]: (file & line not available)
Dec 2 03:17:33 localhost puppet-user[67824]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 2 03:17:33 localhost puppet-user[67824]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:17:33 localhost puppet-user[67824]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:17:33 localhost puppet-user[67824]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 2 03:17:33 localhost puppet-user[67824]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:17:33 localhost puppet-user[67824]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:17:33 localhost puppet-user[67824]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 2 03:17:33 localhost puppet-user[67824]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:17:33 localhost puppet-user[67824]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:17:33 localhost puppet-user[67824]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 2 03:17:33 localhost puppet-user[67824]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:17:33 localhost puppet-user[67824]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:17:33 localhost puppet-user[67824]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 2 03:17:33 localhost puppet-user[67824]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:17:33 localhost puppet-user[67824]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:17:33 localhost puppet-user[67824]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 2 03:17:33 localhost puppet-user[67824]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:17:33 localhost puppet-user[67824]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:17:33 localhost puppet-user[67824]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 2 03:17:33 localhost puppet-user[67824]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.26 seconds
Dec 2 03:17:34 localhost ansible-async_wrapper.py[67809]: 67810 still running (3600)
Dec 2 03:17:39 localhost ansible-async_wrapper.py[67809]: 67810 still running (3595)
Dec 2 03:17:40 localhost python3[68076]: ansible-ansible.legacy.async_status Invoked with jid=736775549377.67806 mode=status _async_dir=/tmp/.ansible_async
Dec 2 03:17:41 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 03:17:41 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 03:17:41 localhost systemd[1]: Reloading.
Dec 2 03:17:41 localhost systemd-rc-local-generator[68157]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:17:41 localhost systemd-sysv-generator[68163]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:17:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:17:41 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 03:17:42 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 03:17:42 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 03:17:42 localhost systemd[1]: man-db-cache-update.service: Consumed 1.180s CPU time.
Dec 2 03:17:42 localhost systemd[1]: run-r3f68b58bb0b54722b6ce99d9b8400895.service: Deactivated successfully.
Dec 2 03:17:42 localhost puppet-user[67824]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created
Dec 2 03:17:42 localhost puppet-user[67824]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}79bc8750f7571a711937563794c62afe468b6afdec06ce72ff411863e05e4aa2'
Dec 2 03:17:42 localhost puppet-user[67824]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd'
Dec 2 03:17:42 localhost puppet-user[67824]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea'
Dec 2 03:17:42 localhost puppet-user[67824]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97'
Dec 2 03:17:42 localhost puppet-user[67824]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events
Dec 2 03:17:44 localhost ansible-async_wrapper.py[67809]: 67810 still running (3590)
Dec 2 03:17:47 localhost puppet-user[67824]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully
Dec 2 03:17:47 localhost systemd[1]: Reloading.
Dec 2 03:17:48 localhost systemd-rc-local-generator[69203]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:17:48 localhost systemd-sysv-generator[69208]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:17:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 2 03:17:48 localhost systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon....
Dec 2 03:17:48 localhost snmpd[69217]: Can't find directory of RPM packages
Dec 2 03:17:48 localhost systemd[1]: tmp-crun.pCSP8q.mount: Deactivated successfully.
Dec 2 03:17:48 localhost snmpd[69217]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB
Dec 2 03:17:48 localhost podman[69216]: 2025-12-02 08:17:48.386083962 +0000 UTC m=+0.080423505 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true, vendor=Red Hat, Inc., container_name=collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git)
Dec 2 03:17:48 localhost systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon..
Dec 2 03:17:48 localhost podman[69216]: 2025-12-02 08:17:48.421312838 +0000 UTC m=+0.115652401 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, 
maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, distribution-scope=public) Dec 2 03:17:48 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:17:48 localhost systemd[1]: Reloading. Dec 2 03:17:48 localhost systemd-rc-local-generator[69261]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:48 localhost systemd-sysv-generator[69264]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:48 localhost systemd[1]: Reloading. Dec 2 03:17:48 localhost systemd-rc-local-generator[69298]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:48 localhost systemd-sysv-generator[69304]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:49 localhost puppet-user[67824]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running' Dec 2 03:17:49 localhost puppet-user[67824]: Notice: Applied catalog in 15.51 seconds Dec 2 03:17:49 localhost puppet-user[67824]: Application: Dec 2 03:17:49 localhost puppet-user[67824]: Initial environment: production Dec 2 03:17:49 localhost puppet-user[67824]: Converged environment: production Dec 2 03:17:49 localhost puppet-user[67824]: Run mode: user Dec 2 03:17:49 localhost puppet-user[67824]: Changes: Dec 2 03:17:49 localhost puppet-user[67824]: Total: 8 Dec 2 03:17:49 localhost puppet-user[67824]: Events: Dec 2 03:17:49 localhost puppet-user[67824]: Success: 8 Dec 2 03:17:49 localhost puppet-user[67824]: Total: 8 Dec 2 03:17:49 localhost puppet-user[67824]: Resources: Dec 2 03:17:49 localhost puppet-user[67824]: Restarted: 1 Dec 2 03:17:49 localhost puppet-user[67824]: Changed: 8 Dec 2 03:17:49 localhost puppet-user[67824]: Out of sync: 8 Dec 2 03:17:49 localhost puppet-user[67824]: Total: 19 Dec 2 03:17:49 localhost puppet-user[67824]: Time: Dec 2 03:17:49 localhost puppet-user[67824]: Filebucket: 0.00 Dec 2 03:17:49 localhost puppet-user[67824]: Schedule: 0.00 Dec 2 03:17:49 localhost puppet-user[67824]: Augeas: 0.01 Dec 2 03:17:49 localhost puppet-user[67824]: File: 0.07 Dec 2 03:17:49 localhost puppet-user[67824]: Config retrieval: 0.32 Dec 2 03:17:49 localhost puppet-user[67824]: Service: 1.16 Dec 2 03:17:49 localhost puppet-user[67824]: Transaction evaluation: 15.50 Dec 2 03:17:49 localhost puppet-user[67824]: Catalog application: 15.51 Dec 2 03:17:49 localhost puppet-user[67824]: Last run: 1764663469 Dec 2 03:17:49 localhost 
puppet-user[67824]: Exec: 5.06 Dec 2 03:17:49 localhost puppet-user[67824]: Package: 9.01 Dec 2 03:17:49 localhost puppet-user[67824]: Total: 15.52 Dec 2 03:17:49 localhost puppet-user[67824]: Version: Dec 2 03:17:49 localhost puppet-user[67824]: Config: 1764663453 Dec 2 03:17:49 localhost puppet-user[67824]: Puppet: 7.10.0 Dec 2 03:17:49 localhost ansible-async_wrapper.py[67810]: Module complete (67810) Dec 2 03:17:49 localhost ansible-async_wrapper.py[67809]: Done in kid B. Dec 2 03:17:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:17:50 localhost systemd[1]: tmp-crun.gOlKtO.mount: Deactivated successfully. Dec 2 03:17:50 localhost podman[69325]: 2025-12-02 08:17:50.409054275 +0000 UTC m=+0.091959045 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4) Dec 2 03:17:50 localhost podman[69325]: 2025-12-02 08:17:50.447864863 +0000 UTC m=+0.130769633 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid) Dec 2 03:17:50 localhost systemd[1]: 
f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:17:50 localhost python3[69326]: ansible-ansible.legacy.async_status Invoked with jid=736775549377.67806 mode=status _async_dir=/tmp/.ansible_async Dec 2 03:17:51 localhost python3[69358]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 2 03:17:51 localhost python3[69374]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:17:52 localhost python3[69424]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:52 localhost python3[69442]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpm2w18f56 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 2 03:17:52 localhost python3[69472]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:54 localhost python3[69575]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Dec 2 03:17:54 localhost python3[69594]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:55 localhost python3[69626]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:17:56 localhost python3[69676]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:56 localhost python3[69694]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:57 localhost python3[69756]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:57 localhost python3[69774]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:58 localhost python3[69836]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:58 localhost python3[69854]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:58 localhost python3[69916]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False 
get_mime=True get_attributes=True Dec 2 03:17:59 localhost python3[69934]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:59 localhost python3[69964]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:17:59 localhost systemd[1]: Reloading. Dec 2 03:17:59 localhost systemd-rc-local-generator[69988]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:59 localhost systemd-sysv-generator[69991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 03:18:00 localhost python3[70050]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:18:00 localhost python3[70068]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:01 localhost python3[70130]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:18:01 localhost python3[70148]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:02 localhost python3[70178]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:18:02 localhost systemd[1]: Reloading. Dec 2 03:18:02 localhost systemd-rc-local-generator[70204]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 03:18:02 localhost systemd-sysv-generator[70209]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:02 localhost systemd[1]: Starting Create netns directory... Dec 2 03:18:02 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 03:18:02 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 03:18:02 localhost systemd[1]: Finished Create netns directory. Dec 2 03:18:03 localhost python3[70235]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 2 03:18:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:18:03 localhost podman[70252]: 2025-12-02 08:18:03.661469086 +0000 UTC m=+0.099340764 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, 
konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-qdrouterd, tcib_managed=true, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, version=17.1.12) Dec 2 03:18:03 localhost podman[70252]: 2025-12-02 08:18:03.881539748 +0000 UTC m=+0.319411496 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Dec 2 03:18:03 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. 
Dec 2 03:18:05 localhost python3[70321]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 2 03:18:05 localhost podman[70466]: 2025-12-02 08:18:05.988067692 +0000 UTC m=+0.080236319 container create a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:18:06 localhost systemd[1]: Started libpod-conmon-a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.scope. 
Dec 2 03:18:06 localhost podman[70492]: 2025-12-02 08:18:06.029599656 +0000 UTC m=+0.085064790 container create 01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, version=17.1.12, build-date=2025-11-19T00:35:22Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_libvirt_init_secret, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 03:18:06 localhost podman[70466]: 2025-12-02 08:18:05.943970679 +0000 UTC m=+0.036139336 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Dec 2 03:18:06 localhost podman[70504]: 2025-12-02 08:18:06.057206225 +0000 UTC m=+0.092306875 container create e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.component=openstack-ovn-controller-container, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step4, container_name=configure_cms_options, 
version=17.1.12) Dec 2 03:18:06 localhost systemd[1]: Started libcrun container. Dec 2 03:18:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06b9618ea7afeaba672d022a7f469c1b4fb954818b2395f63391bb50912ecbb/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:06 localhost systemd[1]: Started libpod-conmon-01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69.scope. Dec 2 03:18:06 localhost systemd[1]: Started libpod-conmon-e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3.scope. Dec 2 03:18:06 localhost systemd[1]: Started libcrun container. Dec 2 03:18:06 localhost podman[70492]: 2025-12-02 08:18:05.982025144 +0000 UTC m=+0.037490268 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:18:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dfa5ad77b1341d3196a32aea0408575f7ecd87125bb33cfdce442fdca4faf78/merged/etc/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dfa5ad77b1341d3196a32aea0408575f7ecd87125bb33cfdce442fdca4faf78/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dfa5ad77b1341d3196a32aea0408575f7ecd87125bb33cfdce442fdca4faf78/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:06 localhost systemd[1]: Started libcrun container. Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:18:06 localhost podman[70466]: 2025-12-02 08:18:06.096716385 +0000 UTC m=+0.188885002 container init a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc.) Dec 2 03:18:06 localhost podman[70493]: 2025-12-02 08:18:06.097216721 +0000 UTC m=+0.147690980 container create 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, release=1761123044, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:18:06 localhost podman[70504]: 2025-12-02 08:18:06.020061378 +0000 UTC m=+0.055162078 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:18:06 localhost systemd[1]: Started libpod-conmon-814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.scope. 
Dec 2 03:18:06 localhost podman[70466]: 2025-12-02 08:18:06.130217958 +0000 UTC m=+0.222386595 container start a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:18:06 localhost python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=885e9e62222ac12bce952717b40ccfc4 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Dec 2 03:18:06 localhost systemd[1]: Started libcrun container. 
Dec 2 03:18:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0089ea487a0d5fd991d7e6cecf5db6fae8c1b61a42816d2acbe202fbd50d575/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:06 localhost podman[70493]: 2025-12-02 08:18:06.04677323 +0000 UTC m=+0.097247499 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Dec 2 03:18:06 localhost podman[70492]: 2025-12-02 08:18:06.151070467 +0000 UTC m=+0.206535571 container init 01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, container_name=nova_libvirt_init_secret, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, 
batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step4, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc.) 
Dec 2 03:18:06 localhost podman[70492]: 2025-12-02 08:18:06.158435537 +0000 UTC m=+0.213900641 container start 01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, name=rhosp17/openstack-nova-libvirt, container_name=nova_libvirt_init_secret, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1) Dec 2 03:18:06 localhost podman[70492]: 2025-12-02 08:18:06.158635503 +0000 UTC m=+0.214100607 container attach 01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, container_name=nova_libvirt_init_secret, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team) Dec 2 03:18:06 localhost podman[70539]: 2025-12-02 08:18:06.060415795 +0000 UTC m=+0.045197818 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. 
Dec 2 03:18:06 localhost podman[70493]: 2025-12-02 08:18:06.171508123 +0000 UTC m=+0.221982412 container init 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container) Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:18:06 localhost podman[70573]: 2025-12-02 08:18:06.195661786 +0000 UTC m=+0.062764495 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12) Dec 2 03:18:06 localhost podman[70539]: 2025-12-02 08:18:06.210571599 +0000 UTC m=+0.195353592 container create 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, release=1761123044, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, container_name=logrotate_crond, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 2 03:18:06 localhost systemd[1]: Started libpod-conmon-7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.scope. Dec 2 03:18:06 localhost podman[70493]: 2025-12-02 08:18:06.253695743 +0000 UTC m=+0.304169992 container start 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com) Dec 2 03:18:06 localhost python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=885e9e62222ac12bce952717b40ccfc4 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Dec 2 03:18:06 localhost systemd[1]: Started libcrun container. 
Dec 2 03:18:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bf0a50fd432b1e17b5b60f382aa20fe21251bda35e0089667eec28efb9c70f/merged/var/log/containers supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:18:06 localhost podman[70613]: 2025-12-02 08:18:06.29823758 +0000 UTC m=+0.096049332 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1761123044, build-date=2025-11-19T00:11:48Z, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute) Dec 2 03:18:06 localhost podman[70573]: 2025-12-02 08:18:06.298969272 +0000 UTC m=+0.166071991 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:18:06 localhost podman[70539]: 2025-12-02 08:18:06.299041444 +0000 UTC m=+0.283823427 container init 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-cron, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 03:18:06 localhost podman[70573]: unhealthy Dec 2 03:18:06 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:18:06 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'. Dec 2 03:18:06 localhost podman[70504]: 2025-12-02 08:18:06.309566762 +0000 UTC m=+0.344667402 container init e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, container_name=configure_cms_options, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; 
then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, build-date=2025-11-18T23:34:05Z, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, name=rhosp17/openstack-ovn-controller, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, architecture=x86_64) Dec 2 03:18:06 localhost podman[70504]: 2025-12-02 08:18:06.321432011 +0000 UTC m=+0.356532651 container start e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, container_name=configure_cms_options, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 
03:18:06 localhost podman[70504]: 2025-12-02 08:18:06.321729451 +0000 UTC m=+0.356830111 container attach e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=configure_cms_options, 
managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:18:06 localhost systemd[1]: libpod-01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69.scope: Deactivated successfully. Dec 2 03:18:06 localhost podman[70539]: 2025-12-02 08:18:06.328847143 +0000 UTC m=+0.313629136 container start 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Dec 2 03:18:06 localhost podman[70492]: 2025-12-02 08:18:06.333302471 +0000 UTC m=+0.388767565 container died 01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, 
batch=17.1_20251118.1, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_libvirt_init_secret, tcib_managed=true, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}) Dec 2 03:18:06 localhost python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Dec 2 03:18:06 localhost podman[70678]: 2025-12-02 08:18:06.411466174 +0000 UTC m=+0.073452427 container cleanup 01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': 
{'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_libvirt_init_secret, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4) Dec 2 03:18:06 localhost ovs-vsctl[70710]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . external_ids ovn-cms-options Dec 2 03:18:06 localhost systemd[1]: libpod-conmon-01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69.scope: Deactivated successfully. Dec 2 03:18:06 localhost systemd[1]: libpod-e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3.scope: Deactivated successfully. 
Dec 2 03:18:06 localhost python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack Dec 2 03:18:06 localhost podman[70504]: 2025-12-02 08:18:06.421359153 +0000 UTC m=+0.456459793 container died e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-18T23:34:05Z, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=configure_cms_options, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:18:06 localhost podman[70613]: 2025-12-02 08:18:06.437101762 +0000 UTC m=+0.234913494 container exec_died 
814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Dec 2 03:18:06 localhost podman[70613]: unhealthy Dec 2 03:18:06 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:18:06 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Failed with result 'exit-code'. Dec 2 03:18:06 localhost podman[70676]: 2025-12-02 08:18:06.502609103 +0000 UTC m=+0.168185488 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, release=1761123044, architecture=x86_64, 
container_name=logrotate_crond, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:18:06 localhost podman[70676]: 2025-12-02 08:18:06.511040005 +0000 UTC m=+0.176616420 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64) Dec 2 03:18:06 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:18:06 localhost podman[70714]: 2025-12-02 08:18:06.55457504 +0000 UTC m=+0.115851888 container cleanup e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, container_name=configure_cms_options, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:18:06 localhost systemd[1]: libpod-conmon-e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3.scope: Deactivated successfully. Dec 2 03:18:06 localhost python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi Dec 2 03:18:06 localhost podman[70825]: 2025-12-02 08:18:06.715556823 +0000 UTC m=+0.078833916 container create f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, release=1761123044) Dec 2 03:18:06 localhost systemd[1]: Started libpod-conmon-f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.scope. Dec 2 03:18:06 localhost podman[70825]: 2025-12-02 08:18:06.678152507 +0000 UTC m=+0.041429600 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:18:06 localhost systemd[1]: Started libcrun container. Dec 2 03:18:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becbc927e1a2defd8b98f9313e9ae54e436a645a48c9af865764923e7f3644aa/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:18:06 localhost podman[70825]: 2025-12-02 08:18:06.83498407 +0000 UTC m=+0.198261163 container init f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=nova_migration_target, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=) Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:18:06 localhost podman[70825]: 2025-12-02 08:18:06.892326176 +0000 UTC m=+0.255603299 container start f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, release=1761123044, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4) Dec 2 03:18:06 localhost python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared 
registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:18:06 localhost podman[70871]: 2025-12-02 08:18:06.905898329 +0000 UTC m=+0.146226325 container create 9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=setup_ovs_manager, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 
'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, version=17.1.12) Dec 2 03:18:06 localhost podman[70871]: 2025-12-02 08:18:06.846322673 +0000 UTC m=+0.086650729 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 2 03:18:06 localhost systemd[1]: Started libpod-conmon-9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b.scope. Dec 2 03:18:06 localhost systemd[1]: Started libcrun container. 
Dec 2 03:18:06 localhost podman[70871]: 2025-12-02 08:18:06.983660039 +0000 UTC m=+0.223987995 container init 9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=setup_ovs_manager, version=17.1.12, managed_by=tripleo_ansible, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-type=git) Dec 2 03:18:07 localhost podman[70871]: 2025-12-02 08:18:06.99651592 +0000 UTC m=+0.236843896 container start 9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, build-date=2025-11-19T00:14:25Z, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 
'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=setup_ovs_manager, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:18:07 localhost systemd[1]: var-lib-containers-storage-overlay-6dfa5ad77b1341d3196a32aea0408575f7ecd87125bb33cfdce442fdca4faf78-merged.mount: Deactivated successfully. Dec 2 03:18:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69-userdata-shm.mount: Deactivated successfully. 
Dec 2 03:18:07 localhost podman[70871]: 2025-12-02 08:18:07.004495558 +0000 UTC m=+0.244823614 container attach 9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, url=https://www.redhat.com, container_name=setup_ovs_manager, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1) Dec 2 03:18:07 localhost systemd[1]: tmp-crun.QX3SdV.mount: Deactivated successfully. Dec 2 03:18:07 localhost podman[70901]: 2025-12-02 08:18:07.012445996 +0000 UTC m=+0.111656248 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) 
Dec 2 03:18:07 localhost podman[70901]: 2025-12-02 08:18:07.362441613 +0000 UTC m=+0.461651895 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, container_name=nova_migration_target, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, tcib_managed=true, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible) Dec 2 03:18:07 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:18:07 localhost kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure Dec 2 03:18:09 localhost ovs-vsctl[71082]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager Dec 2 03:18:10 localhost systemd[1]: libpod-9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b.scope: Deactivated successfully. Dec 2 03:18:10 localhost systemd[1]: libpod-9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b.scope: Consumed 2.996s CPU time. 
Dec 2 03:18:10 localhost podman[70871]: 2025-12-02 08:18:10.106668251 +0000 UTC m=+3.346996257 container died 9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=setup_ovs_manager, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Dec 2 03:18:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b-userdata-shm.mount: Deactivated successfully. Dec 2 03:18:10 localhost systemd[1]: var-lib-containers-storage-overlay-a962ed19f38fa02a2bde769e5b1e4ad9f81e2456610cd4047cfb92b422afb6bb-merged.mount: Deactivated successfully. 
Dec 2 03:18:10 localhost podman[71083]: 2025-12-02 08:18:10.219923748 +0000 UTC m=+0.101022196 container cleanup 9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:18:10 localhost systemd[1]: libpod-conmon-9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b.scope: Deactivated successfully. Dec 2 03:18:10 localhost python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata Dec 2 03:18:10 localhost podman[71194]: 2025-12-02 08:18:10.712691369 +0000 UTC m=+0.100644444 container create 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64) Dec 2 03:18:10 localhost podman[71200]: 2025-12-02 08:18:10.72814618 +0000 UTC m=+0.095576376 container create b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 03:18:10 localhost podman[71194]: 2025-12-02 08:18:10.659316367 +0000 UTC m=+0.047269502 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 2 03:18:10 localhost podman[71200]: 2025-12-02 08:18:10.673588702 +0000 UTC m=+0.041018918 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 2 03:18:10 localhost systemd[1]: Started libpod-conmon-b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.scope. Dec 2 03:18:10 localhost systemd[1]: Started libcrun container. 
Dec 2 03:18:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d25cd45e405537f342915e53026fb2ea6ae337ec52f5b72439f9a37d98e6337/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d25cd45e405537f342915e53026fb2ea6ae337ec52f5b72439f9a37d98e6337/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d25cd45e405537f342915e53026fb2ea6ae337ec52f5b72439f9a37d98e6337/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:10 localhost systemd[1]: Started libpod-conmon-6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.scope. Dec 2 03:18:10 localhost systemd[1]: Started libcrun container. Dec 2 03:18:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a895fb8ef70030e2b27c789af81d44f745a1833cc8dfd0936f4f5302c8f5799a/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a895fb8ef70030e2b27c789af81d44f745a1833cc8dfd0936f4f5302c8f5799a/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a895fb8ef70030e2b27c789af81d44f745a1833cc8dfd0936f4f5302c8f5799a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:18:10 localhost podman[71200]: 2025-12-02 08:18:10.849009193 +0000 UTC m=+0.216439369 container init b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 2 03:18:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 2 03:18:10 localhost podman[71194]: 2025-12-02 08:18:10.873997861 +0000 UTC m=+0.261950956 container init 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 2 03:18:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 2 03:18:10 localhost podman[71200]: 2025-12-02 08:18:10.895423919 +0000 UTC m=+0.262854125 container start b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 2 03:18:10 localhost python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 2 03:18:10 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 2 03:18:10 localhost systemd[1]: Created slice User Slice of UID 0.
Dec 2 03:18:10 localhost systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 2 03:18:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 2 03:18:10 localhost podman[71194]: 2025-12-02 08:18:10.950883166 +0000 UTC m=+0.338836201 container start 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team)
Dec 2 03:18:10 localhost python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=6b6de39672ef4d892f2e8f81f38c430b --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 2 03:18:10 localhost systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 2 03:18:10 localhost systemd[1]: Starting User Manager for UID 0...
Dec 2 03:18:11 localhost podman[71235]: 2025-12-02 08:18:11.015209117 +0000 UTC m=+0.105806924 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044)
Dec 2 03:18:11 localhost podman[71235]: 2025-12-02 08:18:11.05318302 +0000 UTC m=+0.143780837 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, tcib_managed=true, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container)
Dec 2 03:18:11 localhost podman[71235]: unhealthy
Dec 2 03:18:11 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 03:18:11 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 2 03:18:11 localhost podman[71253]: 2025-12-02 08:18:11.094952161 +0000 UTC m=+0.135944014 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible)
Dec 2 03:18:11 localhost podman[71253]: 2025-12-02 08:18:11.106708846 +0000 UTC m=+0.147700699 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 2 03:18:11 localhost podman[71253]: unhealthy
Dec 2 03:18:11 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 03:18:11 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 2 03:18:11 localhost systemd[71261]: Queued start job for default target Main User Target.
Dec 2 03:18:11 localhost systemd[71261]: Created slice User Application Slice.
Dec 2 03:18:11 localhost systemd[71261]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 2 03:18:11 localhost systemd[71261]: Started Daily Cleanup of User's Temporary Directories.
Dec 2 03:18:11 localhost systemd[71261]: Reached target Paths.
Dec 2 03:18:11 localhost systemd[71261]: Reached target Timers.
Dec 2 03:18:11 localhost systemd[71261]: Starting D-Bus User Message Bus Socket...
Dec 2 03:18:11 localhost systemd[71261]: Starting Create User's Volatile Files and Directories...
Dec 2 03:18:11 localhost systemd[71261]: Finished Create User's Volatile Files and Directories.
Dec 2 03:18:11 localhost systemd[71261]: Listening on D-Bus User Message Bus Socket.
Dec 2 03:18:11 localhost systemd[71261]: Reached target Sockets.
Dec 2 03:18:11 localhost systemd[71261]: Reached target Basic System.
Dec 2 03:18:11 localhost systemd[71261]: Reached target Main User Target.
Dec 2 03:18:11 localhost systemd[71261]: Startup finished in 146ms.
Dec 2 03:18:11 localhost systemd[1]: Started User Manager for UID 0.
Dec 2 03:18:11 localhost systemd[1]: Started Session c9 of User root.
Dec 2 03:18:11 localhost systemd[1]: session-c9.scope: Deactivated successfully.
Dec 2 03:18:11 localhost kernel: device br-int entered promiscuous mode
Dec 2 03:18:11 localhost NetworkManager[5967]: [1764663491.2866] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11)
Dec 2 03:18:11 localhost systemd-udevd[71346]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 03:18:11 localhost systemd-udevd[71349]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 03:18:11 localhost NetworkManager[5967]: [1764663491.3276] device (genev_sys_6081): carrier: link connected
Dec 2 03:18:11 localhost NetworkManager[5967]: [1764663491.3280] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12)
Dec 2 03:18:11 localhost kernel: device genev_sys_6081 entered promiscuous mode
Dec 2 03:18:11 localhost python3[71368]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:18:11 localhost python3[71384]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:18:12 localhost python3[71400]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:18:12 localhost python3[71416]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:18:12 localhost python3[71435]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:18:13 localhost python3[71452]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:18:13 localhost python3[71468]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 03:18:13 localhost python3[71486]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 03:18:13 localhost python3[71504]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 03:18:14 localhost python3[71520]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 03:18:14 localhost python3[71536]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 03:18:14 localhost python3[71552]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 03:18:15 localhost python3[71613]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.7649007-108792-245862602601178/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:18:15 localhost python3[71642]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.7649007-108792-245862602601178/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:18:16 localhost python3[71671]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.7649007-108792-245862602601178/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:18:16 localhost python3[71700]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.7649007-108792-245862602601178/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:18:17 localhost python3[71729]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.7649007-108792-245862602601178/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:18:17 localhost python3[71758]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.7649007-108792-245862602601178/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:18:18 localhost python3[71774]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 2 03:18:18 localhost systemd[1]: Reloading.
Dec 2 03:18:18 localhost systemd-sysv-generator[71800]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:18:18 localhost systemd-rc-local-generator[71795]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:18:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:18:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 2 03:18:18 localhost systemd[1]: tmp-crun.6tA5F9.mount: Deactivated successfully.
Dec 2 03:18:18 localhost podman[71811]: 2025-12-02 08:18:18.615541837 +0000 UTC m=+0.071314392 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, 
distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:18:18 localhost podman[71811]: 2025-12-02 08:18:18.624847067 +0000 UTC m=+0.080619612 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-collectd, tcib_managed=true, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:18:18 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. 
Dec 2 03:18:19 localhost python3[71846]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:18:19 localhost systemd[1]: Reloading. Dec 2 03:18:19 localhost systemd-rc-local-generator[71874]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:19 localhost systemd-sysv-generator[71880]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:19 localhost systemd[1]: Starting ceilometer_agent_compute container... Dec 2 03:18:20 localhost tripleo-start-podman-container[71887]: Creating additional drop-in dependency for "ceilometer_agent_compute" (814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae) Dec 2 03:18:20 localhost systemd[1]: Reloading. Dec 2 03:18:20 localhost systemd-rc-local-generator[71942]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:20 localhost systemd-sysv-generator[71945]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:20 localhost systemd[1]: Started ceilometer_agent_compute container. 
Dec 2 03:18:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:18:20 localhost systemd[1]: tmp-crun.pcLGRx.mount: Deactivated successfully. Dec 2 03:18:20 localhost podman[71971]: 2025-12-02 08:18:20.865527378 +0000 UTC m=+0.091488629 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:18:20 localhost podman[71971]: 2025-12-02 08:18:20.878909004 +0000 UTC m=+0.104870275 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step3, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, 
com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=) Dec 2 03:18:20 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:18:21 localhost python3[71972]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:18:21 localhost systemd[1]: Reloading. Dec 2 03:18:21 localhost systemd-rc-local-generator[72019]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 03:18:21 localhost systemd-sysv-generator[72023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:21 localhost systemd[1]: Stopping User Manager for UID 0... Dec 2 03:18:21 localhost systemd[71261]: Activating special unit Exit the Session... Dec 2 03:18:21 localhost systemd[71261]: Stopped target Main User Target. Dec 2 03:18:21 localhost systemd[71261]: Stopped target Basic System. Dec 2 03:18:21 localhost systemd[71261]: Stopped target Paths. Dec 2 03:18:21 localhost systemd[71261]: Stopped target Sockets. Dec 2 03:18:21 localhost systemd[71261]: Stopped target Timers. Dec 2 03:18:21 localhost systemd[71261]: Stopped Daily Cleanup of User's Temporary Directories. Dec 2 03:18:21 localhost systemd[71261]: Closed D-Bus User Message Bus Socket. Dec 2 03:18:21 localhost systemd[71261]: Stopped Create User's Volatile Files and Directories. Dec 2 03:18:21 localhost systemd[71261]: Removed slice User Application Slice. Dec 2 03:18:21 localhost systemd[71261]: Reached target Shutdown. Dec 2 03:18:21 localhost systemd[71261]: Finished Exit the Session. Dec 2 03:18:21 localhost systemd[71261]: Reached target Exit the Session. Dec 2 03:18:21 localhost systemd[1]: Starting ceilometer_agent_ipmi container... Dec 2 03:18:21 localhost systemd[1]: user@0.service: Deactivated successfully. Dec 2 03:18:21 localhost systemd[1]: Stopped User Manager for UID 0. Dec 2 03:18:21 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Dec 2 03:18:21 localhost systemd[1]: run-user-0.mount: Deactivated successfully. 
Dec 2 03:18:21 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Dec 2 03:18:21 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Dec 2 03:18:21 localhost systemd[1]: Removed slice User Slice of UID 0. Dec 2 03:18:21 localhost systemd[1]: Started ceilometer_agent_ipmi container. Dec 2 03:18:22 localhost python3[72057]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:18:22 localhost systemd[1]: Reloading. Dec 2 03:18:22 localhost systemd-sysv-generator[72083]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:22 localhost systemd-rc-local-generator[72079]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:22 localhost systemd[1]: Starting logrotate_crond container... Dec 2 03:18:22 localhost systemd[1]: Started logrotate_crond container. Dec 2 03:18:23 localhost python3[72123]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:18:24 localhost sshd[72125]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:18:24 localhost systemd[1]: Reloading. Dec 2 03:18:24 localhost systemd-sysv-generator[72155]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 03:18:24 localhost systemd-rc-local-generator[72149]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:24 localhost systemd[1]: Starting nova_migration_target container... Dec 2 03:18:24 localhost systemd[1]: Started nova_migration_target container. Dec 2 03:18:25 localhost python3[72193]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:18:25 localhost systemd[1]: Reloading. Dec 2 03:18:25 localhost systemd-rc-local-generator[72219]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:25 localhost systemd-sysv-generator[72224]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:25 localhost systemd[1]: Starting ovn_controller container... Dec 2 03:18:26 localhost tripleo-start-podman-container[72233]: Creating additional drop-in dependency for "ovn_controller" (b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d) Dec 2 03:18:26 localhost systemd[1]: Reloading. Dec 2 03:18:26 localhost systemd-sysv-generator[72293]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 03:18:26 localhost systemd-rc-local-generator[72288]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:26 localhost systemd[1]: Started ovn_controller container. Dec 2 03:18:26 localhost python3[72317]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:18:28 localhost systemd[1]: Reloading. Dec 2 03:18:28 localhost systemd-rc-local-generator[72342]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:28 localhost systemd-sysv-generator[72347]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:28 localhost systemd[1]: Starting ovn_metadata_agent container... Dec 2 03:18:28 localhost systemd[1]: Started ovn_metadata_agent container. 
Dec 2 03:18:29 localhost python3[72399]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:30 localhost python3[72551]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005541914 step=4 update_config_hash_only=False Dec 2 03:18:31 localhost python3[72599]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:31 localhost python3[72615]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 2 03:18:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:18:34 localhost podman[72631]: 2025-12-02 08:18:34.055606379 +0000 UTC m=+0.064325884 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, release=1761123044, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4) Dec 2 03:18:34 localhost podman[72631]: 2025-12-02 08:18:34.267899519 +0000 UTC m=+0.276619014 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, container_name=metrics_qdr, version=17.1.12, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:18:34 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:18:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:18:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:18:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:18:37 localhost podman[72662]: 2025-12-02 08:18:37.088271118 +0000 UTC m=+0.096150935 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, name=rhosp17/openstack-cron, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-18T22:49:32Z, release=1761123044, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64) Dec 2 03:18:37 localhost podman[72663]: 2025-12-02 08:18:37.140318338 +0000 UTC m=+0.145126140 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:18:37 localhost podman[72662]: 2025-12-02 08:18:37.153311513 +0000 UTC m=+0.161191340 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, release=1761123044, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:18:37 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:18:37 localhost podman[72663]: 2025-12-02 08:18:37.200015777 +0000 UTC m=+0.204823609 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 2 03:18:37 localhost systemd[1]: tmp-crun.rcGtXz.mount: Deactivated successfully. Dec 2 03:18:37 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:18:37 localhost podman[72664]: 2025-12-02 08:18:37.221441864 +0000 UTC m=+0.226366349 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true) Dec 2 03:18:37 localhost podman[72664]: 2025-12-02 08:18:37.278899633 +0000 UTC m=+0.283824068 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:12:45Z, release=1761123044, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:18:37 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:18:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:18:38 localhost podman[72733]: 2025-12-02 08:18:38.073490141 +0000 UTC m=+0.079614249 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vcs-type=git, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Dec 2 03:18:38 localhost podman[72733]: 2025-12-02 08:18:38.472966579 +0000 UTC m=+0.479090687 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1) Dec 2 03:18:38 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. 
Dec 2 03:18:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:18:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:18:42 localhost podman[72757]: 2025-12-02 08:18:42.132292879 +0000 UTC m=+0.134688974 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, 
name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:18:42 localhost podman[72756]: 2025-12-02 08:18:42.097580168 +0000 UTC m=+0.103693819 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Dec 2 03:18:42 localhost podman[72756]: 2025-12-02 08:18:42.178047374 +0000 UTC m=+0.184161035 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, 
release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:18:42 localhost podman[72757]: 2025-12-02 08:18:42.182948396 +0000 UTC m=+0.185344521 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, version=17.1.12, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, container_name=ovn_controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, architecture=x86_64, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 03:18:42 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:18:42 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:18:48 localhost snmpd[69217]: empty variable list in _query Dec 2 03:18:48 localhost snmpd[69217]: empty variable list in _query Dec 2 03:18:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:18:49 localhost podman[72806]: 2025-12-02 08:18:49.067936694 +0000 UTC m=+0.075432290 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, container_name=collectd, distribution-scope=public, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Dec 2 03:18:49 localhost podman[72806]: 2025-12-02 08:18:49.083792438 +0000 UTC m=+0.091288004 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, container_name=collectd, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64) Dec 2 03:18:49 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:18:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:18:51 localhost podman[72825]: 2025-12-02 08:18:51.079212793 +0000 UTC m=+0.082121258 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044) Dec 2 03:18:51 localhost podman[72825]: 2025-12-02 08:18:51.088370648 +0000 UTC m=+0.091279163 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:18:51 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:18:52 localhost sshd[72844]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:19:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:19:05 localhost systemd[1]: tmp-crun.h34PZI.mount: Deactivated successfully. 
Dec 2 03:19:05 localhost podman[72846]: 2025-12-02 08:19:05.091379991 +0000 UTC m=+0.091040932 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, container_name=metrics_qdr, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.41.4) Dec 2 03:19:05 localhost podman[72846]: 2025-12-02 08:19:05.299955201 +0000 UTC m=+0.299616102 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container) Dec 2 03:19:05 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:19:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:19:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:19:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:19:08 localhost podman[72876]: 2025-12-02 08:19:08.077784837 +0000 UTC m=+0.081851144 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, version=17.1.12, build-date=2025-11-18T22:49:32Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:19:08 localhost podman[72876]: 2025-12-02 08:19:08.092015542 +0000 UTC m=+0.096081809 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Dec 2 03:19:08 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:19:08 localhost podman[72878]: 2025-12-02 08:19:08.142924056 +0000 UTC m=+0.141806041 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, version=17.1.12, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi) Dec 2 03:19:08 localhost systemd[1]: tmp-crun.YAKjVw.mount: Deactivated successfully. Dec 2 03:19:08 localhost podman[72877]: 2025-12-02 08:19:08.190792795 +0000 UTC m=+0.192582111 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 2 03:19:08 localhost podman[72878]: 2025-12-02 08:19:08.203906376 +0000 UTC m=+0.202788321 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:19:08 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:19:08 localhost podman[72877]: 2025-12-02 08:19:08.226932066 +0000 UTC m=+0.228721362 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:19:08 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:19:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:19:09 localhost podman[72947]: 2025-12-02 08:19:09.077979463 +0000 UTC m=+0.081689619 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc.) Dec 2 03:19:09 localhost podman[72947]: 2025-12-02 08:19:09.453746968 +0000 UTC m=+0.457457074 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:19:09 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:19:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:19:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:19:13 localhost podman[72970]: 2025-12-02 08:19:13.079812703 +0000 UTC m=+0.084123045 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1) Dec 2 03:19:13 localhost systemd[1]: tmp-crun.hitcFA.mount: Deactivated successfully. 
Dec 2 03:19:13 localhost podman[72971]: 2025-12-02 08:19:13.150423894 +0000 UTC m=+0.150425311 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 03:19:13 localhost podman[72970]: 2025-12-02 08:19:13.153892523 +0000 UTC m=+0.158202895 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:19:13 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. 
Dec 2 03:19:13 localhost podman[72971]: 2025-12-02 08:19:13.174845848 +0000 UTC m=+0.174847275 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true) Dec 2 03:19:13 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:19:18 localhost sshd[73018]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:19:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:19:19 localhost podman[73020]: 2025-12-02 08:19:19.322484245 +0000 UTC m=+0.079719108 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4) Dec 2 03:19:19 localhost podman[73020]: 2025-12-02 08:19:19.335932016 +0000 UTC m=+0.093166879 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, 
com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, name=rhosp17/openstack-collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3) Dec 2 03:19:19 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:19:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:19:22 localhost podman[73041]: 2025-12-02 08:19:22.084400232 +0000 UTC m=+0.077373404 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container) Dec 2 03:19:22 localhost podman[73041]: 2025-12-02 08:19:22.094706604 +0000 UTC m=+0.087679796 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, architecture=x86_64, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com) Dec 2 03:19:22 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:19:33 localhost systemd[1]: tmp-crun.dwqlo0.mount: Deactivated successfully. 
Dec 2 03:19:33 localhost podman[73163]: 2025-12-02 08:19:33.413701087 +0000 UTC m=+0.093743796 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_BRANCH=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 03:19:33 localhost podman[73163]: 2025-12-02 08:19:33.550980916 +0000 UTC m=+0.231023575 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, RELEASE=main, ceph=True, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 03:19:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:19:35 localhost systemd[1]: tmp-crun.NL9gPW.mount: Deactivated successfully. 
Dec 2 03:19:35 localhost podman[73309]: 2025-12-02 08:19:35.684529667 +0000 UTC m=+0.106931009 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64) Dec 2 03:19:35 localhost podman[73309]: 2025-12-02 08:19:35.857745521 +0000 UTC m=+0.280146813 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, container_name=metrics_qdr, batch=17.1_20251118.1, io.openshift.expose-services=) Dec 2 03:19:35 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:19:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:19:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:19:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:19:39 localhost systemd[1]: tmp-crun.QvvgA2.mount: Deactivated successfully. 
Dec 2 03:19:39 localhost podman[73338]: 2025-12-02 08:19:39.09176757 +0000 UTC m=+0.092382763 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, 
tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., container_name=logrotate_crond, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:19:39 localhost podman[73338]: 2025-12-02 08:19:39.100673469 +0000 UTC m=+0.101288622 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:19:39 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:19:39 localhost podman[73340]: 2025-12-02 08:19:39.145835683 +0000 UTC m=+0.140535131 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
release=1761123044, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, tcib_managed=true, vcs-type=git) Dec 2 03:19:39 localhost podman[73339]: 2025-12-02 08:19:39.193405892 +0000 UTC m=+0.190517826 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=ceilometer_agent_compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git) Dec 2 03:19:39 localhost podman[73340]: 2025-12-02 08:19:39.202770565 +0000 UTC m=+0.197470013 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, 
build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=) Dec 2 03:19:39 
localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:19:39 localhost podman[73339]: 2025-12-02 08:19:39.21889011 +0000 UTC m=+0.216002004 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:11:48Z, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=) Dec 2 03:19:39 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:19:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:19:40 localhost systemd[1]: tmp-crun.cv4XJR.mount: Deactivated successfully. 
Dec 2 03:19:40 localhost podman[73412]: 2025-12-02 08:19:40.084930146 +0000 UTC m=+0.092762775 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:19:40 localhost podman[73412]: 2025-12-02 08:19:40.454917281 +0000 UTC m=+0.462749920 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:19:40 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:19:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:19:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:19:44 localhost podman[73437]: 2025-12-02 08:19:44.087050294 +0000 UTC m=+0.092375244 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible) Dec 2 03:19:44 localhost systemd[1]: tmp-crun.admLKV.mount: Deactivated successfully. 
Dec 2 03:19:44 localhost podman[73437]: 2025-12-02 08:19:44.139876699 +0000 UTC m=+0.145201608 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1761123044, container_name=ovn_metadata_agent) Dec 2 03:19:44 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. 
Dec 2 03:19:44 localhost podman[73438]: 2025-12-02 08:19:44.14056638 +0000 UTC m=+0.142876685 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, 
tcib_managed=true, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=ovn_controller) Dec 2 03:19:44 localhost podman[73438]: 2025-12-02 08:19:44.220564355 +0000 UTC m=+0.222874690 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, release=1761123044, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:19:44 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:19:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:19:50 localhost podman[73484]: 2025-12-02 08:19:50.067581628 +0000 UTC m=+0.072638735 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, config_id=tripleo_step3, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team) Dec 2 03:19:50 localhost podman[73484]: 2025-12-02 08:19:50.078974065 +0000 UTC m=+0.084031182 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:19:50 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:19:51 localhost sshd[73503]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:19:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:19:53 localhost podman[73505]: 2025-12-02 08:19:53.074761224 +0000 UTC m=+0.080500201 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:19:53 localhost podman[73505]: 2025-12-02 08:19:53.082980342 +0000 UTC m=+0.088719339 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.buildah.version=1.41.4, 
vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_id=tripleo_step3, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z) 
Dec 2 03:19:53 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:19:57 localhost sshd[73525]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:20:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:20:06 localhost podman[73527]: 2025-12-02 08:20:06.077776609 +0000 UTC m=+0.084709243 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, container_name=metrics_qdr, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12) Dec 2 03:20:06 localhost podman[73527]: 2025-12-02 08:20:06.292977447 +0000 UTC m=+0.299910131 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-18T22:49:46Z, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:20:06 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:20:07 localhost sshd[73556]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:20:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. 
Dec 2 03:20:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:20:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:20:10 localhost podman[73560]: 2025-12-02 08:20:10.090020334 +0000 UTC m=+0.083310129 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Dec 2 03:20:10 localhost systemd[1]: tmp-crun.LGlGvB.mount: Deactivated successfully. 
Dec 2 03:20:10 localhost podman[73559]: 2025-12-02 08:20:10.141038672 +0000 UTC m=+0.138396714 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible) Dec 2 03:20:10 localhost podman[73560]: 2025-12-02 08:20:10.14478874 +0000 UTC m=+0.138078545 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:20:10 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:20:10 localhost podman[73559]: 2025-12-02 08:20:10.163524076 +0000 UTC m=+0.160882138 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, release=1761123044) Dec 2 03:20:10 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:20:10 localhost podman[73558]: 2025-12-02 08:20:10.181351234 +0000 UTC m=+0.183977481 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:20:10 localhost podman[73558]: 2025-12-02 08:20:10.19177039 +0000 UTC m=+0.194396647 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:20:10 localhost systemd[1]: 
7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:20:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:20:11 localhost podman[73628]: 2025-12-02 08:20:11.071290578 +0000 UTC m=+0.075579147 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute) Dec 2 03:20:11 localhost podman[73628]: 2025-12-02 08:20:11.402799979 +0000 UTC m=+0.407088558 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, url=https://www.redhat.com) Dec 2 03:20:11 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:20:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:20:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:20:15 localhost systemd[1]: tmp-crun.4LTOtR.mount: Deactivated successfully. Dec 2 03:20:15 localhost podman[73652]: 2025-12-02 08:20:15.09075552 +0000 UTC m=+0.094765508 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:20:15 localhost podman[73653]: 2025-12-02 08:20:15.14246765 +0000 UTC m=+0.144908599 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4) Dec 2 03:20:15 localhost podman[73653]: 2025-12-02 08:20:15.166019527 +0000 UTC m=+0.168460516 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:20:15 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. 
Dec 2 03:20:15 localhost podman[73652]: 2025-12-02 08:20:15.218135119 +0000 UTC m=+0.222145117 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:20:15 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:20:16 localhost systemd[1]: tmp-crun.KWfnNJ.mount: Deactivated successfully. Dec 2 03:20:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:20:21 localhost podman[73701]: 2025-12-02 08:20:21.070510979 +0000 UTC m=+0.074334679 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-18T22:51:28Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git) Dec 2 03:20:21 localhost podman[73701]: 2025-12-02 08:20:21.106881118 +0000 UTC m=+0.110704798 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:20:21 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:20:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:20:24 localhost podman[73721]: 2025-12-02 08:20:24.07370397 +0000 UTC m=+0.078502049 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 
17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git) Dec 2 03:20:24 localhost podman[73721]: 2025-12-02 08:20:24.083013881 +0000 UTC m=+0.087811980 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, release=1761123044, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12) Dec 2 03:20:24 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:20:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:20:37 localhost podman[73805]: 2025-12-02 08:20:37.060370872 +0000 UTC m=+0.068649440 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container) Dec 2 03:20:37 localhost podman[73805]: 2025-12-02 08:20:37.236919119 +0000 UTC m=+0.245197717 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12) Dec 2 03:20:37 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:20:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:20:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:20:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:20:41 localhost systemd[1]: tmp-crun.O5nO6l.mount: Deactivated successfully. 
Dec 2 03:20:41 localhost podman[73848]: 2025-12-02 08:20:41.083995215 +0000 UTC m=+0.082912667 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, tcib_managed=true, 
url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044) Dec 2 03:20:41 localhost podman[73847]: 2025-12-02 08:20:41.093068729 +0000 UTC m=+0.096339537 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=logrotate_crond, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, 
com.redhat.component=openstack-cron-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:49:32Z) Dec 2 03:20:41 localhost podman[73847]: 2025-12-02 08:20:41.131919295 +0000 UTC m=+0.135190073 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, 
io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:20:41 localhost podman[73848]: 2025-12-02 08:20:41.140783323 +0000 UTC m=+0.139700765 container exec_died 
814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, container_name=ceilometer_agent_compute, version=17.1.12, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc.) Dec 2 03:20:41 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:20:41 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:20:41 localhost podman[73849]: 2025-12-02 08:20:41.224872966 +0000 UTC m=+0.225514232 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1) Dec 2 03:20:41 localhost podman[73849]: 2025-12-02 08:20:41.259099577 +0000 UTC m=+0.259740813 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:20:41 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:20:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:20:42 localhost podman[73918]: 2025-12-02 08:20:42.088429453 +0000 UTC m=+0.084744754 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=nova_migration_target, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) 
Dec 2 03:20:42 localhost podman[73918]: 2025-12-02 08:20:42.49488135 +0000 UTC m=+0.491196691 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, container_name=nova_migration_target, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 
17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64) Dec 2 03:20:42 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:20:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:20:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:20:46 localhost systemd[1]: tmp-crun.wUpiQt.mount: Deactivated successfully. 
Dec 2 03:20:46 localhost podman[73941]: 2025-12-02 08:20:46.072952971 +0000 UTC m=+0.079137990 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, distribution-scope=public, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4)
Dec 2 03:20:46 localhost podman[73941]: 2025-12-02 08:20:46.115351968 +0000 UTC m=+0.121536917 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 2 03:20:46 localhost podman[73942]: 2025-12-02 08:20:46.123247276 +0000 UTC m=+0.122705614 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4)
Dec 2 03:20:46 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 2 03:20:46 localhost podman[73942]: 2025-12-02 08:20:46.144760669 +0000 UTC m=+0.144219007 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 2 03:20:46 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 2 03:20:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 2 03:20:51 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 2 03:20:52 localhost recover_tripleo_nova_virtqemud[73991]: 61907
Dec 2 03:20:52 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 2 03:20:52 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 2 03:20:52 localhost podman[73989]: 2025-12-02 08:20:52.075303907 +0000 UTC m=+0.076960020 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, container_name=collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 2 03:20:52 localhost podman[73989]: 2025-12-02 08:20:52.08434955 +0000 UTC m=+0.086005633 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., release=1761123044, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12)
Dec 2 03:20:52 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 2 03:20:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 2 03:20:55 localhost podman[74011]: 2025-12-02 08:20:55.081898536 +0000 UTC m=+0.085827019 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, architecture=x86_64, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, url=https://www.redhat.com)
Dec 2 03:20:55 localhost podman[74011]: 2025-12-02 08:20:55.118228962 +0000 UTC m=+0.122157435 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible)
Dec 2 03:20:55 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 2 03:21:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 2 03:21:08 localhost podman[74028]: 2025-12-02 08:21:08.089674038 +0000 UTC m=+0.086845220 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 03:21:08 localhost podman[74028]: 2025-12-02 08:21:08.299686044 +0000 UTC m=+0.296857166 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=metrics_qdr, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 2 03:21:08 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 2 03:21:08 localhost sshd[74057]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 03:21:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 2 03:21:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 2 03:21:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 2 03:21:12 localhost podman[74061]: 2025-12-02 08:21:12.100971164 +0000 UTC m=+0.095116889 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20251118.1)
Dec 2 03:21:12 localhost podman[74059]: 2025-12-02 08:21:12.069511708 +0000 UTC m=+0.068418032 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=logrotate_crond, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12)
Dec 2 03:21:12 localhost podman[74060]: 2025-12-02 08:21:12.120193026 +0000 UTC m=+0.116006834 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute)
Dec 2 03:21:12 localhost podman[74059]: 2025-12-02 08:21:12.152146486 +0000 UTC m=+0.151052860 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, com.redhat.component=openstack-cron-container, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 2 03:21:12 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 2 03:21:12 localhost podman[74060]: 2025-12-02 08:21:12.204243048 +0000 UTC m=+0.200056836 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, architecture=x86_64)
Dec 2 03:21:12 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 2 03:21:12 localhost podman[74061]: 2025-12-02 08:21:12.258749944 +0000 UTC m=+0.252895669 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, architecture=x86_64, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:21:12 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:21:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:21:13 localhost podman[74133]: 2025-12-02 08:21:13.094927206 +0000 UTC m=+0.095892785 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, container_name=nova_migration_target, 
name=rhosp17/openstack-nova-compute, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:21:13 localhost podman[74133]: 2025-12-02 08:21:13.471875507 +0000 UTC m=+0.472841076 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 
nova-compute, architecture=x86_64, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Dec 2 03:21:13 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:21:16 localhost python3[74203]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:21:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:21:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:21:16 localhost systemd[1]: tmp-crun.KKXNrS.mount: Deactivated successfully. Dec 2 03:21:16 localhost podman[74249]: 2025-12-02 08:21:16.60703118 +0000 UTC m=+0.096119231 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=) Dec 2 03:21:16 localhost systemd[1]: tmp-crun.bx80YS.mount: 
Deactivated successfully. Dec 2 03:21:16 localhost podman[74250]: 2025-12-02 08:21:16.6536517 +0000 UTC m=+0.140051806 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, container_name=ovn_controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:21:16 localhost python3[74248]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663675.8580468-113049-90246068038572/source _original_basename=tmpzfs4mg0j follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:21:16 localhost podman[74250]: 2025-12-02 08:21:16.675391471 +0000 UTC m=+0.161791567 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64) Dec 2 03:21:16 localhost podman[74249]: 2025-12-02 08:21:16.68718142 +0000 UTC m=+0.176269411 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, maintainer=OpenStack TripleO 
Team) Dec 2 03:21:16 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 2 03:21:16 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 2 03:21:17 localhost python3[74325]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 03:21:19 localhost ansible-async_wrapper.py[74497]: Invoked with 980276114034 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663678.8946633-113244-167853700649424/AnsiballZ_command.py _
Dec 2 03:21:19 localhost ansible-async_wrapper.py[74500]: Starting module and watcher
Dec 2 03:21:19 localhost ansible-async_wrapper.py[74500]: Start watching 74501 (3600)
Dec 2 03:21:19 localhost ansible-async_wrapper.py[74501]: Start module (74501)
Dec 2 03:21:19 localhost ansible-async_wrapper.py[74497]: Return async_wrapper task started.
Dec 2 03:21:19 localhost python3[74521]: ansible-ansible.legacy.async_status Invoked with jid=980276114034.74497 mode=status _async_dir=/tmp/.ansible_async
Dec 2 03:21:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 2 03:21:22 localhost podman[74576]: 2025-12-02 08:21:22.421526745 +0000 UTC m=+0.103241854 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:21:22 localhost podman[74576]: 2025-12-02 08:21:22.432117457 +0000 UTC m=+0.113832526 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, tcib_managed=true, config_id=tripleo_step3, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Dec 2 03:21:22 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:21:23 localhost puppet-user[74520]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5
Dec 2 03:21:23 localhost puppet-user[74520]: (file: /etc/puppet/hiera.yaml)
Dec 2 03:21:23 localhost puppet-user[74520]: Warning: Undefined variable '::deploy_config_name';
Dec 2 03:21:23 localhost puppet-user[74520]: (file & line not available)
Dec 2 03:21:23 localhost puppet-user[74520]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 2 03:21:23 localhost puppet-user[74520]: (file & line not available)
Dec 2 03:21:23 localhost puppet-user[74520]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 2 03:21:23 localhost puppet-user[74520]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:21:23 localhost puppet-user[74520]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:21:23 localhost puppet-user[74520]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 2 03:21:23 localhost puppet-user[74520]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:21:23 localhost puppet-user[74520]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:21:23 localhost puppet-user[74520]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 2 03:21:23 localhost puppet-user[74520]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:21:23 localhost puppet-user[74520]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:21:23 localhost puppet-user[74520]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 2 03:21:23 localhost puppet-user[74520]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:21:23 localhost puppet-user[74520]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:21:23 localhost puppet-user[74520]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 2 03:21:23 localhost puppet-user[74520]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:21:23 localhost puppet-user[74520]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:21:23 localhost puppet-user[74520]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 2 03:21:23 localhost puppet-user[74520]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:21:23 localhost puppet-user[74520]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:21:23 localhost puppet-user[74520]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 2 03:21:23 localhost puppet-user[74520]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.29 seconds
Dec 2 03:21:24 localhost puppet-user[74520]: Notice: Applied catalog in 0.32 seconds
Dec 2 03:21:24 localhost puppet-user[74520]: Application:
Dec 2 03:21:24 localhost puppet-user[74520]: Initial environment: production
Dec 2 03:21:24 localhost puppet-user[74520]: Converged environment: production
Dec 2 03:21:24 localhost puppet-user[74520]: Run mode: user
Dec 2 03:21:24 localhost puppet-user[74520]: Changes:
Dec 2 03:21:24 localhost puppet-user[74520]: Events:
Dec 2 03:21:24 localhost puppet-user[74520]: Resources:
Dec 2 03:21:24 localhost puppet-user[74520]: Total: 19
Dec 2 03:21:24 localhost puppet-user[74520]: Time:
Dec 2 03:21:24 localhost puppet-user[74520]: Filebucket: 0.00
Dec 2 03:21:24 localhost puppet-user[74520]: Package: 0.00
Dec 2 03:21:24 localhost puppet-user[74520]: Schedule: 0.00
Dec 2 03:21:24 localhost puppet-user[74520]: Exec: 0.01
Dec 2 03:21:24 localhost puppet-user[74520]: Augeas: 0.01
Dec 2 03:21:24 localhost puppet-user[74520]: File: 0.02
Dec 2 03:21:24 localhost puppet-user[74520]: Service: 0.08
Dec 2 03:21:24 localhost puppet-user[74520]: Transaction evaluation: 0.31
Dec 2 03:21:24 localhost puppet-user[74520]: Catalog application: 0.32
Dec 2 03:21:24 localhost puppet-user[74520]: Config retrieval: 0.36
Dec 2 03:21:24 localhost puppet-user[74520]: Last run: 1764663684
Dec 2 03:21:24 localhost puppet-user[74520]: Total: 0.33
Dec 2 03:21:24 localhost puppet-user[74520]: Version:
Dec 2 03:21:24 localhost puppet-user[74520]: Config: 1764663683
Dec 2 03:21:24 localhost puppet-user[74520]: Puppet: 7.10.0
Dec 2 03:21:24 localhost ansible-async_wrapper.py[74501]: Module complete (74501)
Dec 2 03:21:24 localhost ansible-async_wrapper.py[74500]: Done in kid B.
Dec 2 03:21:25 localhost sshd[74664]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:21:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:21:25 localhost podman[74666]: 2025-12-02 08:21:25.671202614 +0000 UTC m=+0.069215049 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step3, vcs-type=git) Dec 2 03:21:25 localhost podman[74666]: 2025-12-02 08:21:25.682436236 +0000 UTC m=+0.080448621 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, release=1761123044, distribution-scope=public, tcib_managed=true, container_name=iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20251118.1) Dec 2 03:21:25 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 03:21:30 localhost python3[74702]: ansible-ansible.legacy.async_status Invoked with jid=980276114034.74497 mode=status _async_dir=/tmp/.ansible_async Dec 2 03:21:30 localhost python3[74718]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 2 03:21:31 localhost python3[74734]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:21:31 localhost python3[74784]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:21:32 localhost python3[74802]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpncl_mgl3 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 2 03:21:32 localhost python3[74832]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Dec 2 03:21:33 localhost python3[74937]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Dec 2 03:21:34 localhost python3[74956]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:21:35 localhost python3[74988]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:21:36 localhost python3[75038]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:21:36 localhost python3[75056]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None 
seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:21:36 localhost python3[75118]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:21:37 localhost python3[75136]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:21:37 localhost python3[75227]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:21:37 localhost python3[75257]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:21:38 localhost python3[75339]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:21:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:21:38 localhost systemd[1]: tmp-crun.32OI8V.mount: Deactivated successfully. Dec 2 03:21:38 localhost podman[75358]: 2025-12-02 08:21:38.619964038 +0000 UTC m=+0.103948076 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Dec 2 03:21:38 localhost python3[75357]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:21:38 localhost podman[75358]: 2025-12-02 08:21:38.819299219 +0000 UTC m=+0.303283227 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=metrics_qdr, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4) Dec 2 03:21:38 localhost systemd[1]: 
67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:21:39 localhost python3[75431]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:21:39 localhost systemd[1]: Reloading. Dec 2 03:21:39 localhost systemd-rc-local-generator[75452]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:21:39 localhost systemd-sysv-generator[75456]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:21:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:21:40 localhost python3[75517]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:21:40 localhost python3[75535]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:21:40 localhost python3[75597]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 
2 03:21:41 localhost python3[75615]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:21:41 localhost python3[75645]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:21:41 localhost systemd[1]: Reloading. Dec 2 03:21:41 localhost systemd-rc-local-generator[75669]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:21:41 localhost systemd-sysv-generator[75676]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:21:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:21:42 localhost systemd[1]: Starting Create netns directory... Dec 2 03:21:42 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 03:21:42 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 03:21:42 localhost systemd[1]: Finished Create netns directory. Dec 2 03:21:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. 
Dec 2 03:21:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:21:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:21:42 localhost podman[75704]: 2025-12-02 08:21:42.553706545 +0000 UTC m=+0.086025444 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute) Dec 2 03:21:42 localhost podman[75704]: 2025-12-02 08:21:42.60752482 +0000 UTC m=+0.139843719 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true) Dec 2 03:21:42 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:21:42 localhost python3[75702]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 2 03:21:42 localhost podman[75705]: 2025-12-02 08:21:42.658149526 +0000 UTC m=+0.184845309 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc.) Dec 2 03:21:42 localhost podman[75703]: 2025-12-02 08:21:42.607394566 +0000 UTC m=+0.137223668 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:21:42 localhost podman[75705]: 2025-12-02 08:21:42.706232441 +0000 UTC m=+0.232928214 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team) Dec 2 03:21:42 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:21:42 localhost podman[75703]: 2025-12-02 08:21:42.788692943 +0000 UTC m=+0.318522045 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, container_name=logrotate_crond) Dec 2 03:21:42 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:21:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:21:44 localhost podman[75819]: 2025-12-02 08:21:44.069733162 +0000 UTC m=+0.077037313 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Dec 2 03:21:44 localhost podman[75819]: 2025-12-02 08:21:44.401101188 +0000 UTC m=+0.408405359 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git, container_name=nova_migration_target, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, io.openshift.expose-services=) Dec 2 03:21:44 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. 
Dec 2 03:21:44 localhost python3[75858]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 2 03:21:44 localhost podman[75896]: 2025-12-02 08:21:44.997391178 +0000 UTC m=+0.085551590 container create 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_id=tripleo_step5) Dec 2 03:21:45 localhost systemd[1]: Started libpod-conmon-6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.scope. Dec 2 03:21:45 localhost podman[75896]: 2025-12-02 08:21:44.94604978 +0000 UTC m=+0.034210222 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:21:45 localhost systemd[1]: Started libcrun container. 
Dec 2 03:21:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1d8b5716686b6ea155be98d0f313571788c49d87ac4366e7f84d4f947d1b6e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:21:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1d8b5716686b6ea155be98d0f313571788c49d87ac4366e7f84d4f947d1b6e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 2 03:21:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1d8b5716686b6ea155be98d0f313571788c49d87ac4366e7f84d4f947d1b6e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:21:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1d8b5716686b6ea155be98d0f313571788c49d87ac4366e7f84d4f947d1b6e/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:21:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1d8b5716686b6ea155be98d0f313571788c49d87ac4366e7f84d4f947d1b6e/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:21:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. 
Dec 2 03:21:45 localhost podman[75896]: 2025-12-02 08:21:45.095359315 +0000 UTC m=+0.183519767 container init 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., container_name=nova_compute, config_id=tripleo_step5, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:21:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. 
Dec 2 03:21:45 localhost podman[75896]: 2025-12-02 08:21:45.134661476 +0000 UTC m=+0.222821878 container start 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, release=1761123044, tcib_managed=true) Dec 2 03:21:45 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. 
Dec 2 03:21:45 localhost python3[75858]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=d89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0 --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:21:45 localhost systemd[1]: Created slice User Slice of UID 0. Dec 2 03:21:45 localhost systemd[1]: Starting User Runtime Directory /run/user/0... 
Dec 2 03:21:45 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Dec 2 03:21:45 localhost systemd[1]: Starting User Manager for UID 0... Dec 2 03:21:45 localhost podman[75917]: 2025-12-02 08:21:45.265804701 +0000 UTC m=+0.124581451 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step5, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Dec 2 03:21:45 localhost podman[75917]: 2025-12-02 08:21:45.330703674 +0000 UTC m=+0.189480444 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, distribution-scope=public, architecture=x86_64, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Dec 2 03:21:45 localhost podman[75917]: unhealthy Dec 2 03:21:45 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:21:45 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'. Dec 2 03:21:45 localhost systemd[75931]: Queued start job for default target Main User Target. Dec 2 03:21:45 localhost systemd[75931]: Created slice User Application Slice. Dec 2 03:21:45 localhost systemd[75931]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 2 03:21:45 localhost systemd[75931]: Started Daily Cleanup of User's Temporary Directories. Dec 2 03:21:45 localhost systemd[75931]: Reached target Paths. Dec 2 03:21:45 localhost systemd[75931]: Reached target Timers. Dec 2 03:21:45 localhost systemd[75931]: Starting D-Bus User Message Bus Socket... Dec 2 03:21:45 localhost systemd[75931]: Starting Create User's Volatile Files and Directories... Dec 2 03:21:45 localhost systemd[75931]: Finished Create User's Volatile Files and Directories. 
Dec 2 03:21:45 localhost systemd[75931]: Listening on D-Bus User Message Bus Socket. Dec 2 03:21:45 localhost systemd[75931]: Reached target Sockets. Dec 2 03:21:45 localhost systemd[75931]: Reached target Basic System. Dec 2 03:21:45 localhost systemd[75931]: Reached target Main User Target. Dec 2 03:21:45 localhost systemd[75931]: Startup finished in 147ms. Dec 2 03:21:45 localhost systemd[1]: Started User Manager for UID 0. Dec 2 03:21:45 localhost systemd[1]: Started Session c10 of User root. Dec 2 03:21:45 localhost systemd[1]: session-c10.scope: Deactivated successfully. Dec 2 03:21:45 localhost podman[76020]: 2025-12-02 08:21:45.624837254 +0000 UTC m=+0.066426301 container create 04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, container_name=nova_wait_for_compute_service, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 
'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, release=1761123044) Dec 2 03:21:45 localhost podman[76020]: 2025-12-02 08:21:45.58896457 +0000 UTC m=+0.030553627 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:21:45 localhost systemd[1]: Started libpod-conmon-04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf.scope. Dec 2 03:21:45 localhost systemd[1]: Started libcrun container. 
Dec 2 03:21:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6b5bb5c3faac5bc7b25f16619728c4e2a2d4a71d222c2e5e52b063609b5512/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Dec 2 03:21:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6b5bb5c3faac5bc7b25f16619728c4e2a2d4a71d222c2e5e52b063609b5512/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:21:45 localhost podman[76020]: 2025-12-02 08:21:45.720722406 +0000 UTC m=+0.162311483 container init 04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, version=17.1.12, architecture=x86_64, 
container_name=nova_wait_for_compute_service, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:21:45 localhost podman[76020]: 2025-12-02 08:21:45.728768508 +0000 UTC m=+0.170357585 container start 04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step5, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_wait_for_compute_service, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12) Dec 2 03:21:45 localhost podman[76020]: 2025-12-02 08:21:45.729103808 +0000 UTC m=+0.170692845 container attach 04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, 
name=nova_wait_for_compute_service, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_wait_for_compute_service, vcs-type=git, release=1761123044, name=rhosp17/openstack-nova-compute) Dec 2 03:21:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:21:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:21:47 localhost podman[76044]: 2025-12-02 08:21:47.079544501 +0000 UTC m=+0.080811272 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 03:21:47 localhost podman[76044]: 2025-12-02 08:21:47.124391995 +0000 UTC m=+0.125658786 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, 
vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible) Dec 2 03:21:47 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:21:47 localhost podman[76045]: 2025-12-02 08:21:47.130517927 +0000 UTC m=+0.127232985 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ovn-controller) Dec 2 03:21:47 localhost podman[76045]: 2025-12-02 08:21:47.209972214 +0000 UTC m=+0.206687222 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, build-date=2025-11-18T23:34:05Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:21:47 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:21:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:21:53 localhost systemd[1]: tmp-crun.frq9b8.mount: Deactivated successfully. 
Dec 2 03:21:53 localhost podman[76092]: 2025-12-02 08:21:53.074069762 +0000 UTC m=+0.081263136 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step3, release=1761123044, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12) Dec 2 03:21:53 localhost podman[76092]: 2025-12-02 08:21:53.083713233 +0000 UTC m=+0.090906617 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, container_name=collectd, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd) Dec 2 03:21:53 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:21:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:21:55 localhost systemd[1]: Stopping User Manager for UID 0... 
Dec 2 03:21:55 localhost systemd[75931]: Activating special unit Exit the Session... Dec 2 03:21:55 localhost systemd[75931]: Stopped target Main User Target. Dec 2 03:21:55 localhost systemd[75931]: Stopped target Basic System. Dec 2 03:21:55 localhost systemd[75931]: Stopped target Paths. Dec 2 03:21:55 localhost systemd[75931]: Stopped target Sockets. Dec 2 03:21:55 localhost systemd[75931]: Stopped target Timers. Dec 2 03:21:55 localhost systemd[75931]: Stopped Daily Cleanup of User's Temporary Directories. Dec 2 03:21:55 localhost systemd[75931]: Closed D-Bus User Message Bus Socket. Dec 2 03:21:55 localhost systemd[75931]: Stopped Create User's Volatile Files and Directories. Dec 2 03:21:55 localhost systemd[75931]: Removed slice User Application Slice. Dec 2 03:21:55 localhost systemd[75931]: Reached target Shutdown. Dec 2 03:21:55 localhost systemd[75931]: Finished Exit the Session. Dec 2 03:21:55 localhost systemd[75931]: Reached target Exit the Session. Dec 2 03:21:55 localhost systemd[1]: user@0.service: Deactivated successfully. Dec 2 03:21:55 localhost systemd[1]: Stopped User Manager for UID 0. Dec 2 03:21:55 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Dec 2 03:21:55 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Dec 2 03:21:55 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Dec 2 03:21:55 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Dec 2 03:21:55 localhost systemd[1]: Removed slice User Slice of UID 0. 
Dec 2 03:21:55 localhost podman[76111]: 2025-12-02 08:21:55.819645597 +0000 UTC m=+0.067432602 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z) Dec 2 03:21:55 localhost podman[76111]: 2025-12-02 08:21:55.858923577 +0000 UTC m=+0.106710632 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true) Dec 2 03:21:55 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:22:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:22:09 localhost podman[76131]: 2025-12-02 08:22:09.075861776 +0000 UTC m=+0.083875677 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:22:09 localhost podman[76131]: 2025-12-02 08:22:09.278103058 +0000 UTC m=+0.286116929 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible) Dec 2 03:22:09 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:22:12 localhost systemd[1]: session-27.scope: Deactivated successfully. Dec 2 03:22:12 localhost systemd[1]: session-27.scope: Consumed 2.951s CPU time. Dec 2 03:22:12 localhost systemd-logind[760]: Session 27 logged out. Waiting for processes to exit. Dec 2 03:22:12 localhost systemd-logind[760]: Removed session 27. Dec 2 03:22:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:22:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. 
Dec 2 03:22:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:22:13 localhost podman[76160]: 2025-12-02 08:22:13.097609688 +0000 UTC m=+0.094173529 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:11:48Z, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container) Dec 2 03:22:13 localhost podman[76159]: 2025-12-02 08:22:13.145596701 +0000 UTC m=+0.142946167 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, 
build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:22:13 localhost podman[76160]: 2025-12-02 08:22:13.154742618 +0000 UTC m=+0.151306409 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:22:13 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:22:13 localhost podman[76161]: 2025-12-02 08:22:13.07178811 +0000 UTC m=+0.070153277 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, version=17.1.12) Dec 2 03:22:13 localhost podman[76159]: 2025-12-02 08:22:13.205728584 +0000 UTC m=+0.203078050 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond) Dec 2 03:22:13 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:22:13 localhost podman[76161]: 2025-12-02 08:22:13.255862084 +0000 UTC m=+0.254227221 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 2 03:22:13 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:22:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:22:15 localhost podman[76229]: 2025-12-02 08:22:15.057130953 +0000 UTC m=+0.067350320 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, container_name=nova_migration_target, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:22:15 localhost podman[76229]: 2025-12-02 08:22:15.369382559 +0000 UTC m=+0.379601876 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, architecture=x86_64, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git) Dec 2 03:22:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:22:15 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:22:15 localhost podman[76253]: 2025-12-02 08:22:15.475609525 +0000 UTC m=+0.070429756 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 
'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:22:15 localhost podman[76253]: 2025-12-02 08:22:15.529553483 +0000 UTC m=+0.124373714 container exec_died 
6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc.) Dec 2 03:22:15 localhost podman[76253]: unhealthy Dec 2 03:22:15 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:22:15 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'. Dec 2 03:22:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:22:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:22:18 localhost podman[76276]: 2025-12-02 08:22:18.078952247 +0000 UTC m=+0.081937847 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, 
name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64) Dec 2 03:22:18 localhost podman[76275]: 2025-12-02 08:22:18.129391447 +0000 UTC m=+0.136213717 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:22:18 localhost podman[76276]: 2025-12-02 08:22:18.153176421 +0000 UTC m=+0.156161941 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, 
managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4) Dec 2 03:22:18 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. 
Dec 2 03:22:18 localhost podman[76275]: 2025-12-02 08:22:18.190823569 +0000 UTC m=+0.197645769 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 2 03:22:18 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:22:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:22:24 localhost podman[76323]: 2025-12-02 08:22:24.066196781 +0000 UTC m=+0.073563644 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step3, container_name=collectd, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:22:24 localhost podman[76323]: 2025-12-02 08:22:24.074197911 +0000 UTC m=+0.081564754 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step3, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Dec 2 03:22:24 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:22:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:22:26 localhost podman[76343]: 2025-12-02 08:22:26.07317927 +0000 UTC m=+0.081274205 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:22:26 localhost podman[76343]: 2025-12-02 08:22:26.112909084 +0000 UTC m=+0.121004049 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container) Dec 2 03:22:26 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:22:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:22:40 localhost podman[76424]: 2025-12-02 08:22:40.067152361 +0000 UTC m=+0.074849085 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, 
konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:22:40 localhost podman[76424]: 2025-12-02 08:22:40.255138436 +0000 UTC m=+0.262835140 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, container_name=metrics_qdr, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:22:40 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:22:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:22:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:22:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:22:44 localhost podman[76468]: 2025-12-02 08:22:44.085275259 +0000 UTC m=+0.088443231 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step4, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, architecture=x86_64, vcs-type=git, distribution-scope=public, 
io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible) Dec 2 03:22:44 localhost podman[76469]: 2025-12-02 08:22:44.143517902 +0000 UTC m=+0.144668461 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=ceilometer_agent_compute, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4) Dec 2 03:22:44 localhost podman[76469]: 2025-12-02 08:22:44.193250339 +0000 UTC m=+0.194400918 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, 
vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1) Dec 
2 03:22:44 localhost podman[76470]: 2025-12-02 08:22:44.198188764 +0000 UTC m=+0.196942687 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:12:45Z) Dec 2 03:22:44 localhost podman[76468]: 2025-12-02 08:22:44.220627886 +0000 UTC m=+0.223795898 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git) Dec 2 03:22:44 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:22:44 localhost podman[76470]: 2025-12-02 08:22:44.254155996 +0000 UTC m=+0.252909909 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, version=17.1.12) Dec 2 03:22:44 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:22:44 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:22:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:22:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:22:46 localhost podman[76542]: 2025-12-02 08:22:46.073502161 +0000 UTC m=+0.076786995 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, tcib_managed=true, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:22:46 localhost systemd[1]: tmp-crun.8bJxdH.mount: Deactivated successfully. 
Dec 2 03:22:46 localhost podman[76543]: 2025-12-02 08:22:46.137536515 +0000 UTC m=+0.140166619 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:22:46 localhost podman[76542]: 2025-12-02 08:22:46.189864104 +0000 UTC m=+0.193148918 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step5, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:22:46 localhost podman[76542]: unhealthy Dec 2 03:22:46 localhost systemd[1]: 
6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:22:46 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'. Dec 2 03:22:46 localhost podman[76543]: 2025-12-02 08:22:46.506035303 +0000 UTC m=+0.508665407 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vendor=Red Hat, Inc., 
summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target) Dec 2 03:22:46 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:22:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:22:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:22:49 localhost systemd[1]: tmp-crun.6N2uDL.mount: Deactivated successfully. 
Dec 2 03:22:49 localhost podman[76588]: 2025-12-02 08:22:49.088596564 +0000 UTC m=+0.090017130 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, 
konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044) Dec 2 03:22:49 localhost systemd[1]: tmp-crun.q783jo.mount: Deactivated successfully. Dec 2 03:22:49 localhost podman[76587]: 2025-12-02 08:22:49.140948133 +0000 UTC m=+0.145069993 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=) Dec 2 03:22:49 localhost podman[76588]: 2025-12-02 08:22:49.193203329 +0000 UTC m=+0.194623885 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., 
build-date=2025-11-18T23:34:05Z, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller) Dec 2 03:22:49 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. 
Dec 2 03:22:49 localhost podman[76587]: 2025-12-02 08:22:49.249730838 +0000 UTC m=+0.253852688 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, 
config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true) Dec 2 03:22:49 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:22:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:22:54 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:22:55 localhost recover_tripleo_nova_virtqemud[76641]: 61907 Dec 2 03:22:55 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:22:55 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:22:55 localhost systemd[1]: tmp-crun.EOt9wN.mount: Deactivated successfully. 
Dec 2 03:22:55 localhost podman[76637]: 2025-12-02 08:22:55.085611563 +0000 UTC m=+0.080456281 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://www.redhat.com, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc.) Dec 2 03:22:55 localhost podman[76637]: 2025-12-02 08:22:55.094329886 +0000 UTC m=+0.089174664 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, release=1761123044, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container) Dec 2 03:22:55 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:22:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:22:57 localhost podman[76657]: 2025-12-02 08:22:57.078708307 +0000 UTC m=+0.082097121 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, 
release=1761123044, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:22:57 localhost podman[76657]: 2025-12-02 08:22:57.08520827 +0000 UTC m=+0.088597104 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64) Dec 2 03:22:57 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:23:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:23:11 localhost podman[76677]: 2025-12-02 08:23:11.083404599 +0000 UTC m=+0.088740399 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:23:11 localhost podman[76677]: 2025-12-02 08:23:11.274834553 +0000 UTC m=+0.280170273 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, vendor=Red Hat, Inc.) Dec 2 03:23:11 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:23:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:23:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:23:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:23:15 localhost podman[76707]: 2025-12-02 08:23:15.087864572 +0000 UTC m=+0.085822909 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:23:15 localhost podman[76707]: 2025-12-02 08:23:15.121931468 +0000 UTC m=+0.119889805 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible) Dec 2 03:23:15 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:23:15 localhost podman[76706]: 2025-12-02 08:23:15.143521635 +0000 UTC m=+0.146449178 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, tcib_managed=true, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, distribution-scope=public, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_id=tripleo_step4) Dec 2 03:23:15 localhost podman[76706]: 2025-12-02 08:23:15.18076685 +0000 UTC m=+0.183694353 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step4) Dec 2 03:23:15 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:23:15 localhost podman[76708]: 2025-12-02 08:23:15.197816314 +0000 UTC m=+0.193987175 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, url=https://www.redhat.com) Dec 2 03:23:15 localhost podman[76708]: 2025-12-02 08:23:15.226224983 +0000 UTC m=+0.222395844 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., 
build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:23:15 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:23:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:23:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:23:17 localhost systemd[1]: tmp-crun.wihRoe.mount: Deactivated successfully. 
Dec 2 03:23:17 localhost podman[76782]: 2025-12-02 08:23:17.082995 +0000 UTC m=+0.086678015 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:23:17 localhost podman[76781]: 2025-12-02 08:23:17.128710441 +0000 UTC m=+0.134289736 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=nova_compute, url=https://www.redhat.com, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, architecture=x86_64, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5) Dec 2 03:23:17 
localhost podman[76781]: 2025-12-02 08:23:17.188908346 +0000 UTC m=+0.194487691 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=nova_compute, release=1761123044, name=rhosp17/openstack-nova-compute, version=17.1.12, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64) Dec 2 03:23:17 localhost podman[76781]: unhealthy Dec 2 03:23:17 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:23:17 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'. 
Dec 2 03:23:17 localhost podman[76782]: 2025-12-02 08:23:17.434752313 +0000 UTC m=+0.438435258 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-nova-compute, release=1761123044, architecture=x86_64, container_name=nova_migration_target, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true) Dec 2 03:23:17 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:23:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:23:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:23:20 localhost systemd[1]: tmp-crun.ukVzaB.mount: Deactivated successfully. 
Dec 2 03:23:20 localhost podman[76827]: 2025-12-02 08:23:20.08550802 +0000 UTC m=+0.087655966 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:23:20 localhost podman[76827]: 2025-12-02 08:23:20.132874253 +0000 UTC m=+0.135022199 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, 
distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12) Dec 2 03:23:20 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:23:20 localhost podman[76826]: 2025-12-02 08:23:20.133604096 +0000 UTC m=+0.137639821 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, release=1761123044, distribution-scope=public) Dec 2 03:23:20 localhost podman[76826]: 2025-12-02 08:23:20.221975752 +0000 UTC m=+0.226011467 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.buildah.version=1.41.4, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible) Dec 2 03:23:20 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:23:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:23:26 localhost podman[76874]: 2025-12-02 08:23:26.086971065 +0000 UTC m=+0.088746958 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, container_name=collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Dec 2 03:23:26 localhost podman[76874]: 2025-12-02 08:23:26.098932017 +0000 UTC m=+0.100707960 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, 
url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, tcib_managed=true, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc.) Dec 2 03:23:26 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:23:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:23:28 localhost podman[76894]: 2025-12-02 08:23:28.079831478 +0000 UTC m=+0.084389850 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vcs-type=git) Dec 2 03:23:28 localhost podman[76894]: 2025-12-02 08:23:28.092736588 +0000 UTC m=+0.097295010 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., url=https://www.redhat.com, 
version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, container_name=iscsid, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 
03:23:28 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:23:33 localhost sshd[76913]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:23:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:23:41 localhost systemd[1]: tmp-crun.OMj9f9.mount: Deactivated successfully. Dec 2 03:23:41 localhost podman[76992]: 2025-12-02 08:23:41.928343453 +0000 UTC m=+0.088219023 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64) Dec 2 03:23:42 localhost podman[76992]: 2025-12-02 08:23:42.126925338 +0000 UTC m=+0.286800948 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:23:42 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:23:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. 
Dec 2 03:23:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:23:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:23:46 localhost podman[77024]: 2025-12-02 08:23:46.09135564 +0000 UTC m=+0.086712249 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) Dec 2 03:23:46 localhost podman[77024]: 2025-12-02 08:23:46.118736694 +0000 UTC m=+0.114093233 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=) Dec 2 03:23:46 localhost podman[77022]: 2025-12-02 08:23:46.130576892 +0000 UTC m=+0.129644250 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, managed_by=tripleo_ansible, container_name=logrotate_crond, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, 
architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:23:46 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:23:46 localhost podman[77023]: 2025-12-02 08:23:46.182057004 +0000 UTC m=+0.180092372 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, release=1761123044, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:23:46 localhost podman[77022]: 2025-12-02 08:23:46.215242139 +0000 UTC m=+0.214309477 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.openshift.expose-services=, release=1761123044, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 2 03:23:46 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 2 03:23:46 localhost podman[77023]: 2025-12-02 08:23:46.235856775 +0000 UTC m=+0.233892133 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 2 03:23:46 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 2 03:23:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 2 03:23:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 2 03:23:48 localhost podman[77094]: 2025-12-02 08:23:48.069582793 +0000 UTC m=+0.075268073 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 2 03:23:48 localhost systemd[1]: tmp-crun.QYFkG3.mount: Deactivated successfully.
Dec 2 03:23:48 localhost podman[77093]: 2025-12-02 08:23:48.131045129 +0000 UTC m=+0.137896473 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, architecture=x86_64, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro',
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
maintainer=OpenStack TripleO Team)
Dec 2 03:23:48 localhost podman[77093]: 2025-12-02 08:23:48.187314672 +0000 UTC m=+0.194166006 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro',
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, container_name=nova_compute, distribution-scope=public)
Dec 2 03:23:48 localhost podman[77093]: unhealthy
Dec 2 03:23:48 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 03:23:48 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'.
Dec 2 03:23:48 localhost podman[77094]: 2025-12-02 08:23:48.447980381 +0000 UTC m=+0.453665651 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Dec 2 03:23:48 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 2 03:23:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 2 03:23:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 2 03:23:51 localhost podman[77138]: 2025-12-02 08:23:51.071411653 +0000 UTC m=+0.071108981 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, 
summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 2 03:23:51 localhost podman[77137]: 2025-12-02 08:23:51.121160524 +0000 UTC m=+0.122304244 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:14:25Z, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro',
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 2 03:23:51 localhost podman[77138]: 2025-12-02 08:23:51.144402997 +0000 UTC m=+0.144100365 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, container_name=ovn_controller, tcib_managed=true, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream,
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 2 03:23:51 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 2 03:23:51 localhost podman[77137]: 2025-12-02 08:23:51.213592049 +0000 UTC m=+0.214735789 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4)
Dec 2 03:23:51 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 2 03:23:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 2 03:23:57 localhost podman[77185]: 2025-12-02 08:23:57.082813606 +0000 UTC m=+0.085944306 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_id=tripleo_step3, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 2 03:23:57 localhost podman[77185]: 2025-12-02 08:23:57.094774198 +0000 UTC m=+0.097904878 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:23:57 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:23:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:23:59 localhost podman[77207]: 2025-12-02 08:23:59.080710429 +0000 UTC m=+0.088565014 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com) Dec 2 03:23:59 localhost podman[77207]: 2025-12-02 08:23:59.090252009 +0000 UTC m=+0.098106574 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid) Dec 2 03:23:59 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:24:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:24:13 localhost systemd[1]: tmp-crun.vr9DGF.mount: Deactivated successfully. 
Dec 2 03:24:13 localhost podman[77226]: 2025-12-02 08:24:13.079027044 +0000 UTC m=+0.085598477 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1) Dec 2 03:24:13 localhost podman[77226]: 2025-12-02 08:24:13.304533569 +0000 UTC m=+0.311105032 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, architecture=x86_64, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd) Dec 2 03:24:13 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:24:14 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:24:15 localhost recover_tripleo_nova_virtqemud[77256]: 61907 Dec 2 03:24:15 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:24:15 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:24:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:24:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. 
Dec 2 03:24:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:24:17 localhost podman[77259]: 2025-12-02 08:24:17.077777783 +0000 UTC m=+0.070968675 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:24:17 localhost podman[77259]: 2025-12-02 08:24:17.106719994 +0000 UTC m=+0.099910846 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:24:17 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:24:17 localhost podman[77258]: 2025-12-02 08:24:17.183119729 +0000 UTC m=+0.177310611 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, url=https://www.redhat.com, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044) Dec 2 03:24:17 localhost podman[77257]: 2025-12-02 08:24:17.235349223 +0000 UTC m=+0.233096530 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64) Dec 2 03:24:17 localhost podman[77258]: 2025-12-02 08:24:17.258181154 +0000 UTC m=+0.252372026 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:24:17 
localhost podman[77257]: 2025-12-02 08:24:17.266365024 +0000 UTC m=+0.264112361 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, distribution-scope=public, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:24:17 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:24:17 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:24:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:24:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:24:19 localhost systemd[1]: tmp-crun.QfqZMP.mount: Deactivated successfully. 
Dec 2 03:24:19 localhost podman[77328]: 2025-12-02 08:24:19.081222878 +0000 UTC m=+0.084665318 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Dec 2 03:24:19 localhost podman[77328]: 2025-12-02 08:24:19.123898362 +0000 UTC m=+0.127340862 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.openshift.expose-services=, config_id=tripleo_step5, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, 
vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:24:19 localhost podman[77328]: unhealthy Dec 2 03:24:19 localhost systemd[1]: tmp-crun.WU4ieE.mount: Deactivated successfully. Dec 2 03:24:19 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:24:19 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'. Dec 2 03:24:19 localhost podman[77329]: 2025-12-02 08:24:19.141344935 +0000 UTC m=+0.141367545 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, tcib_managed=true, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4) Dec 2 03:24:19 localhost podman[77329]: 2025-12-02 08:24:19.568779524 +0000 UTC m=+0.568802104 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat 
OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true) Dec 2 03:24:19 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:24:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:24:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:24:22 localhost podman[77375]: 2025-12-02 08:24:22.082984165 +0000 UTC m=+0.089298315 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z) Dec 2 03:24:22 localhost podman[77375]: 2025-12-02 08:24:22.160912384 +0000 UTC m=+0.167226554 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, tcib_managed=true, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:24:22 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:24:22 localhost podman[77376]: 2025-12-02 08:24:22.180539571 +0000 UTC m=+0.183469871 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, vcs-type=git, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, container_name=ovn_controller, release=1761123044, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:24:22 localhost podman[77376]: 2025-12-02 08:24:22.231948902 +0000 UTC m=+0.234879182 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, konflux.additional-tags=17.1.12 
17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true) Dec 2 03:24:22 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:24:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:24:28 localhost podman[77422]: 2025-12-02 08:24:28.068043795 +0000 UTC m=+0.073832631 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-collectd-container, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=collectd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:24:28 localhost podman[77422]: 2025-12-02 08:24:28.100120078 +0000 UTC m=+0.105908914 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z) Dec 2 03:24:28 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:24:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:24:30 localhost systemd[1]: tmp-crun.edAwO9.mount: Deactivated successfully. Dec 2 03:24:30 localhost podman[77442]: 2025-12-02 08:24:30.0828153 +0000 UTC m=+0.086463641 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, description=Red Hat 
OpenStack Platform 17.1 iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:24:30 localhost podman[77442]: 2025-12-02 08:24:30.119434146 +0000 UTC m=+0.123082437 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, container_name=iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible) Dec 2 03:24:30 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:24:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:24:43 localhost podman[77538]: 2025-12-02 08:24:43.481576298 +0000 UTC m=+0.082358930 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Dec 2 03:24:43 localhost podman[77538]: 2025-12-02 08:24:43.643939689 +0000 UTC m=+0.244722321 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, version=17.1.12, batch=17.1_20251118.1) Dec 2 03:24:43 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:24:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:24:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:24:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:24:48 localhost systemd[1]: tmp-crun.funkNb.mount: Deactivated successfully. 
Dec 2 03:24:48 localhost podman[77571]: 2025-12-02 08:24:48.084050417 +0000 UTC m=+0.089968145 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute) Dec 2 03:24:48 localhost podman[77571]: 2025-12-02 08:24:48.10423226 +0000 UTC m=+0.110149988 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:24:48 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:24:48 localhost podman[77572]: 2025-12-02 08:24:48.118576601 +0000 UTC m=+0.122899461 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:24:48 localhost podman[77572]: 2025-12-02 08:24:48.139764964 +0000 UTC m=+0.144087814 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12) Dec 2 03:24:48 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:24:48 localhost podman[77570]: 2025-12-02 08:24:48.224153473 +0000 UTC m=+0.229093572 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, name=rhosp17/openstack-cron, version=17.1.12, com.redhat.component=openstack-cron-container, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:24:48 localhost podman[77570]: 2025-12-02 08:24:48.227478061 +0000 UTC m=+0.232418150 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, io.openshift.expose-services=, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:24:48 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:24:49 localhost systemd[1]: tmp-crun.u6b95J.mount: Deactivated successfully. Dec 2 03:24:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:24:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:24:50 localhost podman[77709]: 2025-12-02 08:24:50.076820238 +0000 UTC m=+0.068255607 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, 
url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container) Dec 2 03:24:50 localhost systemd[1]: tmp-crun.XPJMnI.mount: Deactivated successfully. Dec 2 03:24:50 localhost podman[77708]: 2025-12-02 08:24:50.139777778 +0000 UTC m=+0.135571874 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=nova_compute, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Dec 2 03:24:50 localhost podman[77708]: 2025-12-02 08:24:50.18987863 +0000 UTC m=+0.185672696 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, container_name=nova_compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:24:50 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:24:50 localhost podman[77709]: 2025-12-02 08:24:50.452077383 +0000 UTC m=+0.443512702 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, config_id=tripleo_step4, version=17.1.12, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, 
managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:24:50 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:24:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:24:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:24:53 localhost podman[77779]: 2025-12-02 08:24:53.073254838 +0000 UTC m=+0.079585759 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, tcib_managed=true) Dec 2 03:24:53 localhost podman[77780]: 2025-12-02 08:24:53.129233653 +0000 UTC m=+0.131358890 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, architecture=x86_64, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12) Dec 2 03:24:53 localhost podman[77779]: 2025-12-02 08:24:53.163784568 +0000 UTC m=+0.170115499 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=ovn_metadata_agent, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:24:53 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:24:53 localhost podman[77780]: 2025-12-02 08:24:53.176938785 +0000 UTC m=+0.179064042 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:24:53 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:24:58 localhost systemd[1]: libpod-04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf.scope: Deactivated successfully. Dec 2 03:24:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:24:58 localhost podman[77824]: 2025-12-02 08:24:58.669488005 +0000 UTC m=+0.054202324 container died 04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_wait_for_compute_service, batch=17.1_20251118.1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, 
name=rhosp17/openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, version=17.1.12, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team) Dec 2 03:24:58 localhost systemd[1]: tmp-crun.Z3s9Cx.mount: Deactivated successfully. Dec 2 03:24:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf-userdata-shm.mount: Deactivated successfully. Dec 2 03:24:58 localhost systemd[1]: var-lib-containers-storage-overlay-fe6b5bb5c3faac5bc7b25f16619728c4e2a2d4a71d222c2e5e52b063609b5512-merged.mount: Deactivated successfully. 
Dec 2 03:24:58 localhost podman[77824]: 2025-12-02 08:24:58.705630317 +0000 UTC m=+0.090344566 container cleanup 04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, container_name=nova_wait_for_compute_service, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Dec 2 03:24:58 localhost systemd[1]: libpod-conmon-04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf.scope: Deactivated successfully. Dec 2 03:24:58 localhost python3[75858]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:24:58 localhost podman[77825]: 2025-12-02 08:24:58.750747983 +0000 UTC m=+0.127129447 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, url=https://www.redhat.com, 
com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, build-date=2025-11-18T22:51:28Z) Dec 2 03:24:58 localhost podman[77825]: 2025-12-02 08:24:58.760446027 +0000 UTC m=+0.136827531 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, tcib_managed=true, architecture=x86_64, container_name=collectd, batch=17.1_20251118.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4) Dec 2 03:24:58 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:24:59 localhost python3[77894]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:24:59 localhost python3[77910]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:25:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:25:00 localhost systemd[1]: tmp-crun.rAbHLs.mount: Deactivated successfully. Dec 2 03:25:00 localhost podman[77971]: 2025-12-02 08:25:00.240580257 +0000 UTC m=+0.065032712 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:25:00 localhost podman[77971]: 2025-12-02 08:25:00.250349774 +0000 UTC m=+0.074802219 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, container_name=iscsid) Dec 2 03:25:00 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 03:25:00 localhost python3[77972]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663899.7048783-118037-46971728796331/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:25:00 localhost python3[78007]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 03:25:00 localhost systemd[1]: Reloading. Dec 2 03:25:00 localhost systemd-rc-local-generator[78024]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:25:00 localhost systemd-sysv-generator[78031]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:25:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:25:01 localhost python3[78059]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:25:01 localhost systemd[1]: Reloading. Dec 2 03:25:01 localhost systemd-rc-local-generator[78089]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:25:01 localhost systemd-sysv-generator[78092]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:25:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:25:02 localhost systemd[1]: Starting nova_compute container... Dec 2 03:25:02 localhost tripleo-start-podman-container[78099]: Creating additional drop-in dependency for "nova_compute" (6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e) Dec 2 03:25:02 localhost systemd[1]: Reloading. Dec 2 03:25:02 localhost systemd-rc-local-generator[78155]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:25:02 localhost systemd-sysv-generator[78160]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:25:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:25:02 localhost systemd[1]: Started nova_compute container. 
Dec 2 03:25:03 localhost python3[78196]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:25:04 localhost python3[78317]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005541914 step=5 update_config_hash_only=False Dec 2 03:25:05 localhost python3[78333]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:25:05 localhost python3[78349]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 2 03:25:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:25:13 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:25:14 localhost recover_tripleo_nova_virtqemud[78352]: 61907 Dec 2 03:25:14 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:25:14 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 2 03:25:14 localhost podman[78350]: 2025-12-02 08:25:14.08408584 +0000 UTC m=+0.082659299 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 
17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044) Dec 2 03:25:14 localhost podman[78350]: 2025-12-02 08:25:14.307863565 +0000 UTC m=+0.306437024 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12) Dec 2 03:25:14 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:25:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:25:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:25:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:25:19 localhost podman[78382]: 2025-12-02 08:25:19.07086032 +0000 UTC m=+0.071479651 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, container_name=ceilometer_agent_compute) Dec 2 03:25:19 localhost podman[78381]: 2025-12-02 08:25:19.143744551 +0000 UTC m=+0.142842928 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 2 03:25:19 localhost podman[78382]: 2025-12-02 08:25:19.151434327 +0000 UTC m=+0.152053638 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, 
distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, tcib_managed=true, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:25:19 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:25:19 localhost podman[78381]: 2025-12-02 08:25:19.180889233 +0000 UTC m=+0.179987600 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:25:19 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:25:19 localhost podman[78383]: 2025-12-02 08:25:19.106980071 +0000 UTC m=+0.099821113 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:25:19 localhost podman[78383]: 2025-12-02 08:25:19.238626679 +0000 UTC m=+0.231467741 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, 
distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12) Dec 2 03:25:19 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:25:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:25:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:25:21 localhost podman[78456]: 2025-12-02 08:25:21.077818408 +0000 UTC m=+0.083135744 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, container_name=nova_compute, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.expose-services=) Dec 2 03:25:21 localhost podman[78457]: 2025-12-02 08:25:21.129791575 +0000 UTC m=+0.133140853 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, distribution-scope=public, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:25:21 localhost podman[78456]: 2025-12-02 08:25:21.135155082 +0000 UTC m=+0.140472358 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, architecture=x86_64, container_name=nova_compute, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, version=17.1.12) Dec 2 03:25:21 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:25:21 localhost podman[78457]: 2025-12-02 08:25:21.520945028 +0000 UTC m=+0.524294326 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:25:21 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:25:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:25:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:25:24 localhost podman[78505]: 2025-12-02 08:25:24.08858197 +0000 UTC m=+0.089817590 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true) Dec 2 03:25:24 localhost systemd[1]: tmp-crun.NGNuyN.mount: Deactivated successfully. 
Dec 2 03:25:24 localhost podman[78506]: 2025-12-02 08:25:24.1566204 +0000 UTC m=+0.153936804 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, 
architecture=x86_64, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4) Dec 2 03:25:24 localhost podman[78506]: 2025-12-02 08:25:24.206792314 +0000 UTC m=+0.204108718 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team) Dec 2 03:25:24 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:25:24 localhost podman[78505]: 2025-12-02 08:25:24.258838892 +0000 UTC m=+0.260074482 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Dec 2 03:25:24 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:25:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:25:29 localhost systemd[1]: tmp-crun.gytKHF.mount: Deactivated successfully. 
Dec 2 03:25:29 localhost podman[78554]: 2025-12-02 08:25:29.090602188 +0000 UTC m=+0.092187600 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 
collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:25:29 localhost podman[78554]: 2025-12-02 08:25:29.108299897 +0000 UTC m=+0.109885339 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:25:29 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:25:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:25:31 localhost podman[78574]: 2025-12-02 08:25:31.090703243 +0000 UTC m=+0.090870380 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, release=1761123044, container_name=iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible) Dec 2 03:25:31 localhost podman[78574]: 2025-12-02 08:25:31.127583947 +0000 UTC m=+0.127751064 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, config_id=tripleo_step3, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., container_name=iscsid) Dec 2 03:25:31 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:25:35 localhost sshd[78592]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:25:35 localhost systemd-logind[760]: New session 33 of user zuul. Dec 2 03:25:35 localhost systemd[1]: Started Session 33 of User zuul. 
Dec 2 03:25:36 localhost python3[78701]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 03:25:42 localhost sshd[78888]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:25:43 localhost python3[78966]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None Dec 2 03:25:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:25:44 localhost podman[79046]: 2025-12-02 08:25:44.924895604 +0000 UTC m=+0.068137072 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step1, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, container_name=metrics_qdr, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:25:45 localhost podman[79046]: 2025-12-02 08:25:45.125751856 +0000 UTC m=+0.268993324 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, 
io.buildah.version=1.41.4, config_id=tripleo_step1, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:25:45 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:25:48 localhost python3[79185]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None Dec 2 03:25:48 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled Dec 2 03:25:48 localhost systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation. Dec 2 03:25:48 localhost systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 03:25:48 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 03:25:48 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 03:25:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:25:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:25:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:25:50 localhost systemd[1]: tmp-crun.RV7u4j.mount: Deactivated successfully. Dec 2 03:25:50 localhost podman[79234]: 2025-12-02 08:25:50.060114146 +0000 UTC m=+0.060225240 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, name=rhosp17/openstack-cron, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:25:50 localhost podman[79234]: 2025-12-02 08:25:50.072228682 +0000 UTC m=+0.072339786 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step4, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:25:50 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:25:50 localhost systemd[1]: tmp-crun.5Ru3ID.mount: Deactivated successfully. 
Dec 2 03:25:50 localhost podman[79236]: 2025-12-02 08:25:50.124464156 +0000 UTC m=+0.118931095 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, release=1761123044, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com) Dec 2 03:25:50 localhost podman[79235]: 2025-12-02 08:25:50.173203739 +0000 UTC m=+0.170469740 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_id=tripleo_step4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Dec 2 03:25:50 localhost podman[79236]: 2025-12-02 08:25:50.179977468 +0000 UTC m=+0.174444367 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, version=17.1.12, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc.) 
Dec 2 03:25:50 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:25:50 localhost podman[79235]: 2025-12-02 08:25:50.226842025 +0000 UTC m=+0.224108016 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute) Dec 2 03:25:50 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:25:51 localhost systemd[1]: tmp-crun.eMZztZ.mount: Deactivated successfully. Dec 2 03:25:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:25:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:25:52 localhost podman[79305]: 2025-12-02 08:25:52.072949826 +0000 UTC m=+0.072416649 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:25:52 localhost podman[79305]: 2025-12-02 08:25:52.153467592 +0000 UTC m=+0.152934475 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:25:52 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:25:52 localhost podman[79306]: 2025-12-02 08:25:52.153860244 +0000 UTC m=+0.147926088 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:25:52 localhost podman[79306]: 2025-12-02 08:25:52.495908214 +0000 UTC m=+0.489974068 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, 
com.redhat.component=openstack-nova-compute-container) Dec 2 03:25:52 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:25:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:25:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:25:55 localhost systemd[1]: tmp-crun.LBmxju.mount: Deactivated successfully. Dec 2 03:25:55 localhost podman[79351]: 2025-12-02 08:25:55.077336169 +0000 UTC m=+0.082475254 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible) Dec 2 03:25:55 localhost systemd[1]: tmp-crun.8v8p43.mount: Deactivated successfully. 
Dec 2 03:25:55 localhost podman[79352]: 2025-12-02 08:25:55.142027211 +0000 UTC m=+0.138425219 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, container_name=ovn_controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:25:55 localhost podman[79351]: 2025-12-02 08:25:55.166989103 +0000 UTC m=+0.172128198 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4) Dec 2 03:25:55 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. 
Dec 2 03:25:55 localhost podman[79352]: 2025-12-02 08:25:55.197671645 +0000 UTC m=+0.194069643 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:25:55 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:25:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:26:00 localhost podman[79400]: 2025-12-02 08:26:00.076255926 +0000 UTC m=+0.084092632 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.12, container_name=collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:26:00 localhost podman[79400]: 2025-12-02 08:26:00.112992045 +0000 UTC m=+0.120828691 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., 
container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12) Dec 2 03:26:00 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:26:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:26:02 localhost podman[79420]: 2025-12-02 08:26:02.045543756 +0000 UTC m=+0.053881274 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid) Dec 2 03:26:02 localhost podman[79420]: 2025-12-02 08:26:02.079902565 +0000 UTC m=+0.088240073 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 
iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 2 03:26:02 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:26:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:26:16 localhost podman[79440]: 2025-12-02 08:26:16.072002369 +0000 UTC m=+0.079347523 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:26:16 localhost podman[79440]: 2025-12-02 08:26:16.343953689 +0000 UTC m=+0.351298853 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Dec 2 03:26:16 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:26:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:26:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:26:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:26:20 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Dec 2 03:26:21 localhost recover_tripleo_nova_virtqemud[79483]: 61907 Dec 2 03:26:21 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:26:21 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:26:21 localhost podman[79469]: 2025-12-02 08:26:21.071396957 +0000 UTC m=+0.077278881 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, distribution-scope=public, container_name=logrotate_crond, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Dec 2 03:26:21 localhost podman[79469]: 2025-12-02 08:26:21.106047926 +0000 UTC m=+0.111929850 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, 
container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:26:21 localhost podman[79470]: 2025-12-02 08:26:21.13239611 +0000 UTC m=+0.136643696 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, tcib_managed=true, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:26:21 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:26:21 localhost podman[79471]: 2025-12-02 08:26:21.206829367 +0000 UTC m=+0.208980121 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:26:21 localhost podman[79470]: 2025-12-02 08:26:21.219924301 +0000 UTC m=+0.224171847 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public) Dec 2 03:26:21 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:26:21 localhost podman[79471]: 2025-12-02 08:26:21.234809629 +0000 UTC m=+0.236960393 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:26:21 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:26:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:26:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:26:23 localhost podman[79546]: 2025-12-02 08:26:23.058583295 +0000 UTC m=+0.067851435 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute) Dec 2 03:26:23 localhost podman[79547]: 2025-12-02 08:26:23.108896533 +0000 UTC m=+0.115392561 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044) Dec 2 03:26:23 localhost podman[79546]: 2025-12-02 08:26:23.163190228 +0000 UTC m=+0.172458418 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible) Dec 2 03:26:23 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:26:23 localhost podman[79547]: 2025-12-02 08:26:23.421203659 +0000 UTC m=+0.427699697 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible) Dec 2 03:26:23 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:26:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:26:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:26:26 localhost systemd[1]: tmp-crun.xU5goo.mount: Deactivated successfully. 
Dec 2 03:26:26 localhost podman[79594]: 2025-12-02 08:26:26.086720247 +0000 UTC m=+0.093479788 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64) Dec 2 03:26:26 localhost podman[79594]: 2025-12-02 08:26:26.123802966 +0000 UTC m=+0.130562487 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:26:26 localhost systemd[1]: tmp-crun.09tdiy.mount: Deactivated successfully. Dec 2 03:26:26 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:26:26 localhost podman[79595]: 2025-12-02 08:26:26.133538282 +0000 UTC m=+0.137351237 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 03:26:26 localhost podman[79595]: 2025-12-02 08:26:26.213858042 +0000 UTC m=+0.217671027 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1) Dec 2 03:26:26 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:26:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:26:31 localhost podman[79641]: 2025-12-02 08:26:31.081721468 +0000 UTC m=+0.073285864 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, 
summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Dec 2 03:26:31 localhost podman[79641]: 2025-12-02 08:26:31.091390012 +0000 UTC m=+0.082954328 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:26:31 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:26:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:26:33 localhost podman[79660]: 2025-12-02 08:26:33.062571347 +0000 UTC m=+0.068804923 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 2 03:26:33 localhost podman[79660]: 2025-12-02 08:26:33.070414518 +0000 UTC m=+0.076648104 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:26:33 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:26:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:26:47 localhost systemd[1]: tmp-crun.YITxTk.mount: Deactivated successfully. 
Dec 2 03:26:47 localhost podman[79743]: 2025-12-02 08:26:47.079481559 +0000 UTC m=+0.082774033 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step1) Dec 2 03:26:47 localhost podman[79743]: 2025-12-02 08:26:47.328898517 +0000 UTC m=+0.332190951 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:26:47 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:26:47 localhost systemd[1]: session-33.scope: Deactivated successfully. Dec 2 03:26:47 localhost systemd[1]: session-33.scope: Consumed 6.019s CPU time. Dec 2 03:26:47 localhost systemd-logind[760]: Session 33 logged out. Waiting for processes to exit. Dec 2 03:26:47 localhost systemd-logind[760]: Removed session 33. Dec 2 03:26:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. 
Dec 2 03:26:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:26:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:26:52 localhost systemd[1]: tmp-crun.lGnM8Q.mount: Deactivated successfully. Dec 2 03:26:52 localhost podman[79834]: 2025-12-02 08:26:52.050227209 +0000 UTC m=+0.056872582 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, config_id=tripleo_step4) Dec 2 03:26:52 localhost podman[79833]: 2025-12-02 08:26:52.069339371 +0000 UTC m=+0.075261142 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-cron, container_name=logrotate_crond, release=1761123044, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=) Dec 2 03:26:52 localhost podman[79833]: 2025-12-02 08:26:52.076702147 +0000 UTC m=+0.082623978 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, 
com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, batch=17.1_20251118.1, 
io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12) Dec 2 03:26:52 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:26:52 localhost podman[79834]: 2025-12-02 08:26:52.134148515 +0000 UTC m=+0.140793908 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Dec 2 03:26:52 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:26:52 localhost podman[79835]: 2025-12-02 08:26:52.138656657 +0000 UTC m=+0.139275673 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:26:52 localhost podman[79835]: 2025-12-02 08:26:52.221879762 +0000 UTC m=+0.222498688 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, io.buildah.version=1.41.4) Dec 2 03:26:52 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:26:53 localhost systemd[1]: tmp-crun.3w2wqi.mount: Deactivated successfully. Dec 2 03:26:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:26:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:26:54 localhost systemd[1]: tmp-crun.S1Je7S.mount: Deactivated successfully. 
Dec 2 03:26:54 localhost podman[79907]: 2025-12-02 08:26:54.084104118 +0000 UTC m=+0.083821784 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com) Dec 2 03:26:54 localhost systemd[1]: tmp-crun.CgEWQG.mount: Deactivated successfully. Dec 2 03:26:54 localhost podman[79906]: 2025-12-02 08:26:54.135999772 +0000 UTC m=+0.138028706 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_compute, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 
17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) 
Dec 2 03:26:54 localhost podman[79906]: 2025-12-02 08:26:54.185840586 +0000 UTC m=+0.187869520 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, container_name=nova_compute, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, config_id=tripleo_step5, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12) Dec 2 03:26:54 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:26:54 localhost podman[79907]: 2025-12-02 08:26:54.488830789 +0000 UTC m=+0.488548435 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, vcs-type=git, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12) Dec 2 03:26:54 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:26:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:26:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:26:57 localhost systemd[1]: tmp-crun.nd9c3N.mount: Deactivated successfully. 
Dec 2 03:26:57 localhost podman[79955]: 2025-12-02 08:26:57.104262635 +0000 UTC m=+0.103858542 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, 
distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:26:57 localhost podman[79954]: 2025-12-02 08:26:57.063487726 +0000 UTC m=+0.069361088 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 2 03:26:57 localhost podman[79954]: 2025-12-02 08:26:57.143796886 +0000 UTC m=+0.149670258 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git) Dec 2 03:26:57 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:26:57 localhost podman[79955]: 2025-12-02 08:26:57.202877862 +0000 UTC m=+0.202473789 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-type=git) Dec 2 03:26:57 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:27:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:27:02 localhost systemd[1]: tmp-crun.k34diR.mount: Deactivated successfully. 
Dec 2 03:27:02 localhost podman[80000]: 2025-12-02 08:27:02.046831696 +0000 UTC m=+0.059259013 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, container_name=collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Dec 2 03:27:02 localhost podman[80000]: 2025-12-02 08:27:02.051497213 +0000 UTC m=+0.063924520 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Dec 2 03:27:02 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. 
Dec 2 03:27:02 localhost sshd[80021]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:27:02 localhost systemd-logind[760]: New session 34 of user zuul. Dec 2 03:27:02 localhost systemd[1]: Started Session 34 of User zuul. Dec 2 03:27:03 localhost python3[80040]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 2 03:27:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:27:04 localhost systemd[1]: tmp-crun.WxDWdy.mount: Deactivated successfully. 
Dec 2 03:27:04 localhost podman[80042]: 2025-12-02 08:27:04.104240426 +0000 UTC m=+0.101646928 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, architecture=x86_64, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Dec 2 03:27:04 localhost podman[80042]: 2025-12-02 08:27:04.120821673 +0000 UTC m=+0.118228185 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044) Dec 2 03:27:04 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 03:27:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 03:27:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 4399 writes, 20K keys, 4399 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4399 writes, 504 syncs, 8.73 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 2 03:27:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 03:27:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.2 total, 600.0 interval
Cumulative writes: 5262 writes, 23K keys, 5262 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 5262 writes, 560 syncs, 9.40 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 2 03:27:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 2 03:27:18 localhost podman[80060]: 2025-12-02 08:27:18.075327612 +0000 UTC m=+0.076559241 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:27:18 localhost podman[80060]: 2025-12-02 08:27:18.299514219 +0000 UTC m=+0.300745828 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 2 03:27:18 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 2 03:27:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 2 03:27:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 2 03:27:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 2 03:27:23 localhost podman[80090]: 2025-12-02 08:27:23.086519475 +0000 UTC m=+0.083836349 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, version=17.1.12, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:27:23 localhost podman[80089]: 2025-12-02 08:27:23.062420802 +0000 UTC m=+0.068511630 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, version=17.1.12, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 03:27:23 localhost systemd[1]: tmp-crun.paoFkS.mount: Deactivated successfully. 
Dec 2 03:27:23 localhost podman[80091]: 2025-12-02 08:27:23.125669618 +0000 UTC m=+0.123026774 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, 
com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible) Dec 2 03:27:23 localhost podman[80090]: 2025-12-02 08:27:23.1369289 +0000 UTC m=+0.134245794 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, version=17.1.12, architecture=x86_64) Dec 2 03:27:23 localhost podman[80089]: 2025-12-02 08:27:23.146969153 +0000 UTC m=+0.153059971 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.12, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:27:23 localhost systemd[1]: 
814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:27:23 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:27:23 localhost podman[80091]: 2025-12-02 08:27:23.200949469 +0000 UTC m=+0.198306595 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 2 03:27:23 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 2 03:27:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 2 03:27:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 2 03:27:25 localhost systemd[1]: tmp-crun.60G0YW.mount: Deactivated successfully.
Dec 2 03:27:25 localhost podman[80158]: 2025-12-02 08:27:25.076700922 +0000 UTC m=+0.081944970 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1) Dec 2 03:27:25 localhost podman[80159]: 2025-12-02 08:27:25.127365544 +0000 UTC m=+0.126247123 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, architecture=x86_64, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:27:25 localhost podman[80158]: 
2025-12-02 08:27:25.152721256 +0000 UTC m=+0.157965364 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, managed_by=tripleo_ansible, release=1761123044, vcs-type=git) Dec 2 03:27:25 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:27:25 localhost podman[80159]: 2025-12-02 08:27:25.478887265 +0000 UTC m=+0.477768844 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 
nova-compute, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Dec 2 03:27:25 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:27:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:27:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:27:28 localhost podman[80207]: 2025-12-02 08:27:28.067202095 +0000 UTC m=+0.074221219 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 03:27:28 localhost systemd[1]: tmp-crun.2QO5L8.mount: Deactivated successfully. 
Dec 2 03:27:28 localhost podman[80207]: 2025-12-02 08:27:28.137694307 +0000 UTC m=+0.144713371 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:27:28 localhost podman[80208]: 2025-12-02 08:27:28.140250377 +0000 UTC m=+0.143289776 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ovn_controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:27:28 localhost podman[80208]: 2025-12-02 08:27:28.166829857 +0000 UTC m=+0.169869216 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, container_name=ovn_controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible) Dec 2 03:27:28 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:27:28 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:27:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:27:32 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Dec 2 03:27:32 localhost recover_tripleo_nova_virtqemud[80274]: 61907 Dec 2 03:27:32 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:27:32 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:27:32 localhost systemd[1]: tmp-crun.Q33nmP.mount: Deactivated successfully. Dec 2 03:27:32 localhost podman[80272]: 2025-12-02 08:27:32.593741778 +0000 UTC m=+0.110912655 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-collectd, vcs-type=git) Dec 2 03:27:32 localhost podman[80272]: 2025-12-02 08:27:32.628855075 +0000 UTC m=+0.146025942 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z) Dec 2 03:27:32 
localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:27:32 localhost python3[80271]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 2 03:27:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:27:35 localhost podman[80295]: 2025-12-02 08:27:35.079782784 +0000 UTC m=+0.081048363 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 
iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, container_name=iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:27:35 localhost podman[80295]: 2025-12-02 08:27:35.088444965 +0000 UTC m=+0.089710514 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-18T23:44:13Z, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-iscsid-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, 
io.buildah.version=1.41.4) Dec 2 03:27:35 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:27:36 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 2 03:27:36 localhost systemd[1]: Starting man-db-cache-update.service... Dec 2 03:27:36 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 2 03:27:36 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Dec 2 03:27:36 localhost systemd[1]: Finished man-db-cache-update.service. Dec 2 03:27:36 localhost systemd[1]: run-r8aad0ce0a02f400091ab72c1ae4e2496.service: Deactivated successfully. Dec 2 03:27:36 localhost systemd[1]: run-r1a9de5c2cb8c47c4aac0be42b57b46ab.service: Deactivated successfully. Dec 2 03:27:46 localhost sshd[80463]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:27:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:27:48 localhost podman[80499]: 2025-12-02 08:27:48.823783059 +0000 UTC m=+0.079781287 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack 
Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:27:49 localhost podman[80499]: 2025-12-02 08:27:49.028658276 +0000 UTC m=+0.284656554 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:27:49 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:27:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:27:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:27:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:27:54 localhost podman[80670]: 2025-12-02 08:27:54.090020078 +0000 UTC m=+0.086244965 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:27:54 localhost podman[80670]: 2025-12-02 08:27:54.123906479 +0000 UTC m=+0.120131316 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-19T00:11:48Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_id=tripleo_step4) Dec 2 03:27:54 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:27:54 localhost podman[80671]: 2025-12-02 08:27:54.204490056 +0000 UTC m=+0.192707501 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, vcs-type=git, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:27:54 localhost systemd[1]: tmp-crun.JiBZCI.mount: Deactivated successfully. Dec 2 03:27:54 localhost podman[80669]: 2025-12-02 08:27:54.264707047 +0000 UTC m=+0.261041217 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, distribution-scope=public, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible) Dec 2 03:27:54 localhost podman[80669]: 2025-12-02 08:27:54.274797208 +0000 UTC m=+0.271131418 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.component=openstack-cron-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Dec 2 03:27:54 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: 
Deactivated successfully. Dec 2 03:27:54 localhost podman[80671]: 2025-12-02 08:27:54.319087778 +0000 UTC m=+0.307305163 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1) Dec 2 03:27:54 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:27:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:27:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:27:56 localhost systemd[1]: tmp-crun.6pjI6v.mount: Deactivated successfully. 
Dec 2 03:27:56 localhost podman[80741]: 2025-12-02 08:27:56.095638867 +0000 UTC m=+0.089930957 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=nova_compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:27:56 localhost podman[80742]: 2025-12-02 08:27:56.079420847 +0000 UTC m=+0.072165465 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack 
Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=) Dec 2 03:27:56 localhost podman[80741]: 2025-12-02 08:27:56.143729052 
+0000 UTC m=+0.138021122 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step5, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Dec 2 03:27:56 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:27:56 localhost podman[80742]: 2025-12-02 08:27:56.456124874 +0000 UTC m=+0.448869542 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, build-date=2025-11-19T00:36:58Z, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=nova_migration_target, io.buildah.version=1.41.4, io.openshift.expose-services=, 
summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:27:56 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:27:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:27:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:27:59 localhost systemd[1]: tmp-crun.PwsfwB.mount: Deactivated successfully. 
Dec 2 03:27:59 localhost podman[80791]: 2025-12-02 08:27:59.095724245 +0000 UTC m=+0.094347111 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 03:27:59 localhost podman[80790]: 2025-12-02 08:27:59.06854188 +0000 UTC m=+0.077391590 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:27:59 localhost podman[80790]: 2025-12-02 08:27:59.151979046 +0000 UTC m=+0.160828776 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, vcs-type=git, managed_by=tripleo_ansible, 
io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:27:59 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:27:59 localhost podman[80791]: 2025-12-02 08:27:59.172444794 +0000 UTC m=+0.171067650 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:27:59 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:28:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:28:03 localhost podman[80838]: 2025-12-02 08:28:03.063062074 +0000 UTC m=+0.072347320 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, release=1761123044, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
name=rhosp17/openstack-collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:28:03 localhost podman[80838]: 2025-12-02 08:28:03.077841414 +0000 UTC m=+0.087126830 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red 
Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, container_name=collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1) Dec 2 03:28:03 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:28:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:28:06 localhost systemd[1]: tmp-crun.4W15OX.mount: Deactivated successfully. Dec 2 03:28:06 localhost podman[80857]: 2025-12-02 08:28:06.075861743 +0000 UTC m=+0.080764483 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step3, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.openshift.expose-services=, version=17.1.12) Dec 2 03:28:06 localhost podman[80857]: 2025-12-02 08:28:06.109808685 +0000 UTC m=+0.114711425 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, 
tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, 
com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:28:06 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:28:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:28:20 localhost podman[80874]: 2025-12-02 08:28:20.078539559 +0000 UTC m=+0.083727896 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:28:20 localhost podman[80874]: 2025-12-02 08:28:20.266318452 +0000 UTC m=+0.271506719 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:28:20 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. 
Dec 2 03:28:24 localhost python3[80918]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 03:28:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:28:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:28:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:28:25 localhost podman[80921]: 2025-12-02 08:28:25.091107846 +0000 UTC m=+0.088003204 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:28:25 localhost podman[80921]: 2025-12-02 08:28:25.124954745 +0000 UTC m=+0.121850113 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, vcs-type=git) Dec 2 03:28:25 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:28:25 localhost podman[80923]: 2025-12-02 08:28:25.133582795 +0000 UTC m=+0.128718485 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12) Dec 2 03:28:25 localhost podman[80923]: 2025-12-02 08:28:25.215992693 +0000 UTC m=+0.211128363 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:28:25 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:28:25 localhost podman[80922]: 2025-12-02 08:28:25.187144781 +0000 UTC m=+0.185048778 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, container_name=ceilometer_agent_compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc.) Dec 2 03:28:25 localhost podman[80922]: 2025-12-02 08:28:25.26993854 +0000 UTC m=+0.267842467 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:28:25 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:28:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:28:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:28:27 localhost podman[81112]: 2025-12-02 08:28:27.067691429 +0000 UTC m=+0.072989517 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, url=https://www.redhat.com, vcs-type=git, release=1761123044, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:28:27 localhost systemd[1]: tmp-crun.P5uJ2g.mount: Deactivated successfully. Dec 2 03:28:27 localhost podman[81111]: 2025-12-02 08:28:27.141476507 +0000 UTC m=+0.146999911 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, konflux.additional-tags=17.1.12 
17.1_20251118.1, container_name=nova_compute) Dec 2 03:28:27 localhost podman[81111]: 2025-12-02 08:28:27.197630526 +0000 UTC m=+0.203153970 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:36:58Z, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute) Dec 2 03:28:27 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:28:27 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. 
Dec 2 03:28:27 localhost podman[81112]: 2025-12-02 08:28:27.420710099 +0000 UTC m=+0.426008117 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4) Dec 2 03:28:27 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:28:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:28:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:28:30 localhost systemd[1]: tmp-crun.3zPgUp.mount: Deactivated successfully. 
Dec 2 03:28:30 localhost podman[81171]: 2025-12-02 08:28:30.092046339 +0000 UTC m=+0.095343868 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, release=1761123044, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public) Dec 2 03:28:30 localhost systemd[1]: tmp-crun.jNOmuS.mount: Deactivated successfully. Dec 2 03:28:30 localhost podman[81171]: 2025-12-02 08:28:30.148510987 +0000 UTC m=+0.151808526 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64) Dec 2 03:28:30 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:28:30 localhost podman[81170]: 2025-12-02 08:28:30.153936937 +0000 UTC m=+0.157989636 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com) Dec 2 03:28:30 localhost podman[81170]: 2025-12-02 08:28:30.234500394 +0000 UTC m=+0.238553063 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team) Dec 2 03:28:30 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:28:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:28:34 localhost systemd[1]: tmp-crun.texLSm.mount: Deactivated successfully. 
Dec 2 03:28:34 localhost podman[81278]: 2025-12-02 08:28:34.088402575 +0000 UTC m=+0.090348079 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., container_name=collectd, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 2 03:28:34 localhost podman[81278]: 2025-12-02 08:28:34.102966639 +0000 UTC m=+0.104912163 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, 
url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:28:34 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:28:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:28:37 localhost podman[81298]: 2025-12-02 08:28:37.060616188 +0000 UTC m=+0.068048330 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com) Dec 2 03:28:37 localhost podman[81298]: 2025-12-02 08:28:37.074941066 +0000 UTC m=+0.082373228 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team) Dec 2 03:28:37 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:28:42 localhost sshd[81317]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:28:49 localhost sshd[81318]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:28:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:28:51 localhost podman[81354]: 2025-12-02 08:28:51.033417554 +0000 UTC m=+0.076034982 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:28:51 localhost podman[81354]: 2025-12-02 08:28:51.283857016 +0000 UTC m=+0.326474404 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, release=1761123044, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:28:51 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:28:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:28:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:28:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:28:56 localhost podman[81470]: 2025-12-02 08:28:56.055297678 +0000 UTC m=+0.060340866 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-cron-container, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:28:56 localhost podman[81470]: 2025-12-02 08:28:56.092771939 +0000 UTC m=+0.097815127 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.41.4, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public) Dec 2 03:28:56 localhost systemd[1]: tmp-crun.4FQvta.mount: Deactivated successfully. Dec 2 03:28:56 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:28:56 localhost podman[81472]: 2025-12-02 08:28:56.111740355 +0000 UTC m=+0.112702960 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, build-date=2025-11-19T00:12:45Z, release=1761123044, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1) Dec 2 03:28:56 localhost podman[81472]: 2025-12-02 08:28:56.130625759 +0000 UTC m=+0.131588374 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible) Dec 2 03:28:56 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:28:56 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Dec 2 03:28:56 localhost podman[81471]: 2025-12-02 08:28:56.2405238 +0000 UTC m=+0.242523853 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Dec 2 03:28:56 localhost recover_tripleo_nova_virtqemud[81535]: 61907 Dec 2 03:28:56 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:28:56 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 2 03:28:56 localhost podman[81471]: 2025-12-02 08:28:56.269910266 +0000 UTC m=+0.271910309 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true) Dec 2 03:28:56 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:28:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:28:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:28:58 localhost systemd[1]: tmp-crun.jKWj5K.mount: Deactivated successfully. 
Dec 2 03:28:58 localhost podman[81546]: 2025-12-02 08:28:58.088216416 +0000 UTC m=+0.085792723 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, vcs-type=git, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-nova-compute-container, version=17.1.12, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:28:58 localhost podman[81545]: 2025-12-02 08:28:58.138679157 +0000 UTC m=+0.137980342 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc.) 
Dec 2 03:28:58 localhost podman[81545]: 2025-12-02 08:28:58.15898844 +0000 UTC m=+0.158289695 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=nova_compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:28:58 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:28:58 localhost podman[81546]: 2025-12-02 08:28:58.454846124 +0000 UTC m=+0.452422431 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, tcib_managed=true, container_name=nova_migration_target, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12) Dec 2 03:28:58 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:28:59 localhost systemd[1]: tmp-crun.w0UmK4.mount: Deactivated successfully. Dec 2 03:29:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:29:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:29:01 localhost systemd[1]: tmp-crun.CAYpmo.mount: Deactivated successfully. 
Dec 2 03:29:01 localhost podman[81594]: 2025-12-02 08:29:01.072554355 +0000 UTC m=+0.083884570 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, url=https://www.redhat.com, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:29:01 localhost podman[81595]: 2025-12-02 08:29:01.127333046 +0000 UTC m=+0.130525855 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, container_name=ovn_controller, vcs-type=git, batch=17.1_20251118.1, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:29:01 localhost podman[81594]: 2025-12-02 08:29:01.133942389 +0000 UTC m=+0.145272604 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, release=1761123044, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:29:01 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:29:01 localhost podman[81595]: 2025-12-02 08:29:01.147057104 +0000 UTC m=+0.150249893 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 03:29:01 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:29:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:29:05 localhost podman[81642]: 2025-12-02 08:29:05.070212386 +0000 UTC m=+0.066620389 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:29:05 localhost podman[81642]: 2025-12-02 08:29:05.082723034 +0000 UTC m=+0.079131027 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
com.redhat.component=openstack-collectd-container, tcib_managed=true, version=17.1.12, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:29:05 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:29:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:29:08 localhost podman[81664]: 2025-12-02 08:29:08.053966269 +0000 UTC m=+0.064629905 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step3, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-type=git) Dec 2 03:29:08 localhost podman[81664]: 2025-12-02 08:29:08.060734208 +0000 UTC m=+0.071397844 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, container_name=iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 2 03:29:08 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:29:19 localhost python3[81698]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 03:29:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:29:22 localhost systemd[1]: tmp-crun.Bd8v4W.mount: Deactivated successfully. Dec 2 03:29:22 localhost podman[81702]: 2025-12-02 08:29:22.073914324 +0000 UTC m=+0.077416261 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:29:22 localhost podman[81702]: 2025-12-02 08:29:22.272676732 +0000 UTC m=+0.276178749 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, container_name=metrics_qdr, description=Red 
Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:29:22 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:29:23 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 2 03:29:23 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 2 03:29:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:29:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:29:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:29:27 localhost systemd[1]: tmp-crun.wZBgxq.mount: Deactivated successfully. 
Dec 2 03:29:27 localhost podman[81914]: 2025-12-02 08:29:27.085208745 +0000 UTC m=+0.086265916 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1) Dec 2 03:29:27 localhost podman[81913]: 2025-12-02 08:29:27.127765566 +0000 UTC m=+0.129055634 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, release=1761123044, io.buildah.version=1.41.4, container_name=logrotate_crond) Dec 2 03:29:27 localhost podman[81913]: 2025-12-02 08:29:27.133884966 +0000 UTC m=+0.135175024 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:29:27 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated 
successfully. Dec 2 03:29:27 localhost podman[81915]: 2025-12-02 08:29:27.18194917 +0000 UTC m=+0.181603742 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.12, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=) Dec 2 03:29:27 localhost podman[81915]: 2025-12-02 08:29:27.21471972 +0000 UTC m=+0.214374312 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, version=17.1.12, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, managed_by=tripleo_ansible) Dec 2 03:29:27 localhost podman[81914]: 2025-12-02 08:29:27.212392395 +0000 UTC m=+0.213449556 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 
03:29:27 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:29:27 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:29:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:29:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:29:29 localhost podman[81984]: 2025-12-02 08:29:29.057293723 +0000 UTC m=+0.064811321 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, container_name=nova_compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1) Dec 2 03:29:29 localhost podman[81985]: 2025-12-02 08:29:29.098307612 +0000 UTC m=+0.102774725 container health_status 
f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z) Dec 2 03:29:29 localhost podman[81984]: 2025-12-02 08:29:29.103165796 +0000 UTC m=+0.110683394 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, config_id=tripleo_step5, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:29:29 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:29:29 localhost podman[81985]: 2025-12-02 08:29:29.446815616 +0000 UTC m=+0.451282749 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target) Dec 2 03:29:29 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:29:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:29:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:29:32 localhost podman[82032]: 2025-12-02 08:29:32.060696782 +0000 UTC m=+0.070895439 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 03:29:32 localhost podman[82033]: 2025-12-02 08:29:32.129686607 +0000 UTC m=+0.136494410 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, 
build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:29:32 localhost podman[82032]: 2025-12-02 08:29:32.145882826 +0000 UTC m=+0.156081473 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:29:32 localhost podman[82033]: 2025-12-02 08:29:32.155439052 +0000 UTC m=+0.162246865 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:29:32 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:29:32 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:29:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:29:36 localhost podman[82078]: 2025-12-02 08:29:36.076865987 +0000 UTC m=+0.084080915 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-collectd, tcib_managed=true, config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=collectd) Dec 2 03:29:36 localhost podman[82078]: 2025-12-02 08:29:36.087643527 +0000 UTC m=+0.094858445 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, 
name=rhosp17/openstack-collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, container_name=collectd, maintainer=OpenStack TripleO Team, vcs-type=git) Dec 2 03:29:36 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:29:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:29:39 localhost systemd[1]: tmp-crun.KflI28.mount: Deactivated successfully. Dec 2 03:29:39 localhost podman[82097]: 2025-12-02 08:29:39.088641119 +0000 UTC m=+0.090203615 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:29:39 localhost podman[82097]: 2025-12-02 08:29:39.123417824 +0000 UTC m=+0.124980300 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, version=17.1.12, container_name=iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Dec 2 03:29:39 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:29:42 localhost python3[82129]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Dec 2 03:29:50 localhost sshd[82130]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:29:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:29:52 localhost podman[82192]: 2025-12-02 08:29:52.568904691 +0000 UTC m=+0.071948748 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, container_name=metrics_qdr, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, 
url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:29:52 localhost podman[82192]: 2025-12-02 08:29:52.776026051 +0000 UTC m=+0.279070108 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=metrics_qdr, url=https://www.redhat.com, release=1761123044, distribution-scope=public, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:29:52 localhost systemd[1]: 
67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:29:53 localhost podman[82305]: 2025-12-02 08:29:53.505700188 +0000 UTC m=+0.106034865 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_BRANCH=main, ceph=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, architecture=x86_64, release=1763362218, description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 03:29:53 localhost podman[82305]: 2025-12-02 08:29:53.648860872 +0000 UTC m=+0.249195489 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in 
a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux , architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhceph, GIT_CLEAN=True) Dec 2 03:29:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:29:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:29:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:29:58 localhost podman[82447]: 2025-12-02 08:29:58.09628397 +0000 UTC m=+0.091711268 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, release=1761123044, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, vcs-type=git, com.redhat.component=openstack-cron-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Dec 2 03:29:58 localhost podman[82447]: 2025-12-02 08:29:58.107986294 +0000 UTC m=+0.103413622 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, architecture=x86_64) Dec 2 03:29:58 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:29:58 localhost podman[82449]: 2025-12-02 08:29:58.200151213 +0000 UTC m=+0.186931941 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, version=17.1.12, managed_by=tripleo_ansible) Dec 2 03:29:58 localhost podman[82449]: 2025-12-02 08:29:58.239920097 +0000 UTC m=+0.226700825 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:29:58 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:29:58 localhost podman[82448]: 2025-12-02 08:29:58.263533883 +0000 UTC m=+0.251195425 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git) Dec 2 03:29:58 localhost podman[82448]: 2025-12-02 08:29:58.320893295 +0000 UTC m=+0.308554787 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:29:58 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:29:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:29:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:30:00 localhost podman[82520]: 2025-12-02 08:30:00.070381824 +0000 UTC m=+0.074665334 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public) Dec 2 03:30:00 localhost podman[82521]: 2025-12-02 08:30:00.129656719 +0000 UTC m=+0.134334890 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, io.buildah.version=1.41.4, 
version=17.1.12, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step4) Dec 2 03:30:00 localhost podman[82520]: 2025-12-02 08:30:00.148505652 +0000 UTC 
m=+0.152789192 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com) Dec 2 03:30:00 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:30:00 localhost podman[82521]: 2025-12-02 08:30:00.528610064 +0000 UTC m=+0.533288285 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container) Dec 2 03:30:00 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:30:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:30:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:30:03 localhost podman[82569]: 2025-12-02 08:30:03.088918813 +0000 UTC m=+0.084657501 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:30:03 localhost podman[82569]: 2025-12-02 08:30:03.149589647 +0000 UTC m=+0.145328295 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:30:03 localhost systemd[1]: tmp-crun.NYx2Zj.mount: Deactivated successfully. Dec 2 03:30:03 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:30:03 localhost podman[82570]: 2025-12-02 08:30:03.163146124 +0000 UTC m=+0.153843912 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 
17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, distribution-scope=public, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=ovn_controller) Dec 2 03:30:03 localhost podman[82570]: 2025-12-02 08:30:03.189425113 +0000 UTC m=+0.180122901 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Dec 2 03:30:03 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:30:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:30:07 localhost podman[82617]: 2025-12-02 08:30:07.068834731 +0000 UTC m=+0.074343385 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, distribution-scope=public, version=17.1.12, build-date=2025-11-18T22:51:28Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, 
batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:30:07 localhost podman[82617]: 2025-12-02 08:30:07.080855215 +0000 UTC m=+0.086363829 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step3, vendor=Red Hat, Inc.) Dec 2 03:30:07 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:30:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:30:10 localhost podman[82637]: 2025-12-02 08:30:10.078035481 +0000 UTC m=+0.085454794 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, com.redhat.component=openstack-iscsid-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:30:10 localhost podman[82637]: 2025-12-02 08:30:10.112143158 +0000 UTC m=+0.119562501 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, config_id=tripleo_step3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 03:30:10 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:30:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:30:22 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:30:23 localhost recover_tripleo_nova_virtqemud[82662]: 61907 Dec 2 03:30:23 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:30:23 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:30:23 localhost systemd[1]: tmp-crun.HhA59c.mount: Deactivated successfully. 
Dec 2 03:30:23 localhost podman[82655]: 2025-12-02 08:30:23.09057459 +0000 UTC m=+0.093997151 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, version=17.1.12, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=) Dec 2 03:30:23 localhost podman[82655]: 2025-12-02 08:30:23.277975013 +0000 UTC m=+0.281397594 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 2 03:30:23 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:30:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:30:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:30:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:30:29 localhost systemd[1]: tmp-crun.4UfMhU.mount: Deactivated successfully. 
Dec 2 03:30:29 localhost podman[82686]: 2025-12-02 08:30:29.084962513 +0000 UTC m=+0.084915499 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, release=1761123044, version=17.1.12) Dec 2 03:30:29 localhost podman[82686]: 2025-12-02 08:30:29.119645836 +0000 UTC m=+0.119598822 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, 
config_id=tripleo_step4, release=1761123044, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 2 03:30:29 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:30:29 localhost podman[82688]: 2025-12-02 08:30:29.144173967 +0000 UTC m=+0.138587269 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc.) Dec 2 03:30:29 localhost podman[82687]: 2025-12-02 08:30:29.19867923 +0000 UTC m=+0.194603063 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:30:29 localhost podman[82688]: 2025-12-02 08:30:29.203025231 +0000 UTC m=+0.197438513 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, io.buildah.version=1.41.4, 
io.openshift.expose-services=) Dec 2 03:30:29 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:30:29 localhost podman[82687]: 2025-12-02 08:30:29.261978967 +0000 UTC m=+0.257902760 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 2 03:30:29 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:30:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:30:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:30:31 localhost podman[82759]: 2025-12-02 08:30:31.090055238 +0000 UTC m=+0.091638165 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc.) Dec 2 03:30:31 localhost systemd[1]: tmp-crun.UyrbCY.mount: Deactivated successfully. 
Dec 2 03:30:31 localhost podman[82760]: 2025-12-02 08:30:31.137403952 +0000 UTC m=+0.136784628 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, container_name=nova_migration_target, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:30:31 localhost podman[82759]: 2025-12-02 08:30:31.145539918 +0000 UTC m=+0.147122845 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:30:31 localhost systemd[1]: 
6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:30:31 localhost podman[82760]: 2025-12-02 08:30:31.545243404 +0000 UTC m=+0.544624160 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, 
io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:30:31 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:30:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:30:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:30:34 localhost systemd[1]: tmp-crun.Xrvt6R.mount: Deactivated successfully. 
Dec 2 03:30:34 localhost podman[82808]: 2025-12-02 08:30:34.080684212 +0000 UTC m=+0.090595776 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12) Dec 2 03:30:34 localhost podman[82809]: 2025-12-02 08:30:34.141606563 +0000 UTC m=+0.143029372 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T23:34:05Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, vendor=Red Hat, Inc., 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 03:30:34 localhost podman[82808]: 2025-12-02 08:30:34.151821346 +0000 UTC m=+0.161732910 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:30:34 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:30:34 localhost podman[82809]: 2025-12-02 08:30:34.166578596 +0000 UTC m=+0.168001335 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, container_name=ovn_controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc.) Dec 2 03:30:34 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:30:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:30:38 localhost podman[82858]: 2025-12-02 08:30:38.080145533 +0000 UTC m=+0.083473939 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com) Dec 2 03:30:38 localhost podman[82858]: 2025-12-02 08:30:38.117125689 +0000 UTC m=+0.120454155 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, release=1761123044, managed_by=tripleo_ansible, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Dec 2 03:30:38 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:30:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:30:41 localhost systemd[1]: tmp-crun.hTs03W.mount: Deactivated successfully. Dec 2 03:30:41 localhost podman[82879]: 2025-12-02 08:30:41.085269648 +0000 UTC m=+0.084033894 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, container_name=iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:30:41 localhost podman[82879]: 2025-12-02 08:30:41.09327588 +0000 UTC m=+0.092040086 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, architecture=x86_64, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, 
vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:30:41 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:30:42 localhost systemd[1]: session-34.scope: Deactivated successfully. Dec 2 03:30:42 localhost systemd[1]: session-34.scope: Consumed 19.925s CPU time. Dec 2 03:30:42 localhost systemd-logind[760]: Session 34 logged out. Waiting for processes to exit. Dec 2 03:30:42 localhost systemd-logind[760]: Removed session 34. Dec 2 03:30:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:30:54 localhost systemd[1]: tmp-crun.qNiwER.mount: Deactivated successfully. Dec 2 03:30:54 localhost podman[82945]: 2025-12-02 08:30:54.099180782 +0000 UTC m=+0.096321743 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd) Dec 2 03:30:54 localhost podman[82945]: 2025-12-02 08:30:54.321842964 +0000 UTC m=+0.318983865 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step1, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:30:54 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:30:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:30:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:30:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:31:00 localhost systemd[1]: tmp-crun.wCmoGQ.mount: Deactivated successfully. Dec 2 03:31:00 localhost podman[83051]: 2025-12-02 08:31:00.076018372 +0000 UTC m=+0.078211681 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team) Dec 2 03:31:00 localhost podman[83050]: 2025-12-02 08:31:00.123381468 +0000 UTC m=+0.126447605 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
config_id=tripleo_step4, container_name=logrotate_crond) Dec 2 03:31:00 localhost podman[83051]: 2025-12-02 08:31:00.177867105 +0000 UTC m=+0.180060434 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., 
release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Dec 2 03:31:00 localhost podman[83052]: 2025-12-02 08:31:00.184121669 +0000 UTC m=+0.182391037 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, release=1761123044, tcib_managed=true) Dec 2 03:31:00 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:31:00 localhost podman[83050]: 2025-12-02 08:31:00.215028506 +0000 UTC m=+0.218094573 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, container_name=logrotate_crond, release=1761123044, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container) Dec 2 03:31:00 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:31:00 localhost podman[83052]: 2025-12-02 08:31:00.244896571 +0000 UTC m=+0.243165959 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:31:00 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:31:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:31:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:31:02 localhost systemd[1]: tmp-crun.h2APQR.mount: Deactivated successfully. 
Dec 2 03:31:02 localhost podman[83125]: 2025-12-02 08:31:02.054195321 +0000 UTC m=+0.061291919 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com) Dec 2 03:31:02 localhost podman[83126]: 2025-12-02 08:31:02.066209302 +0000 UTC m=+0.069044508 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red 
Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, architecture=x86_64) Dec 2 03:31:02 localhost podman[83125]: 
2025-12-02 08:31:02.101876807 +0000 UTC m=+0.108973425 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public) Dec 2 03:31:02 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:31:02 localhost podman[83126]: 2025-12-02 08:31:02.404763093 +0000 UTC m=+0.407598279 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, container_name=nova_migration_target, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:31:02 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:31:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:31:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:31:05 localhost podman[83175]: 2025-12-02 08:31:05.078305676 +0000 UTC m=+0.075774606 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public) Dec 2 03:31:05 localhost podman[83175]: 2025-12-02 08:31:05.135115865 +0000 UTC m=+0.132584755 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, container_name=ovn_controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4) Dec 2 03:31:05 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:31:05 localhost podman[83174]: 2025-12-02 08:31:05.137242301 +0000 UTC m=+0.138677434 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 
'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12) Dec 2 03:31:05 localhost podman[83174]: 2025-12-02 08:31:05.220669664 +0000 UTC m=+0.222104817 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Dec 2 03:31:05 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:31:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:31:09 localhost podman[83221]: 2025-12-02 08:31:09.064184365 +0000 UTC m=+0.070689879 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, release=1761123044) Dec 2 03:31:09 localhost podman[83221]: 2025-12-02 08:31:09.078907181 +0000 UTC m=+0.085412735 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, 
name=collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Dec 2 03:31:09 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:31:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:31:12 localhost podman[83241]: 2025-12-02 08:31:12.078283242 +0000 UTC m=+0.085576721 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=iscsid, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:31:12 localhost podman[83241]: 2025-12-02 08:31:12.086342781 +0000 UTC m=+0.093636290 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 
17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.expose-services=, tcib_managed=true) Dec 2 03:31:12 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:31:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:31:25 localhost podman[83260]: 2025-12-02 08:31:25.07025281 +0000 UTC m=+0.074306822 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:31:25 localhost podman[83260]: 2025-12-02 08:31:25.265981258 +0000 UTC m=+0.270035260 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vcs-type=git, distribution-scope=public, config_id=tripleo_step1, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 
'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:31:25 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:31:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:31:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. 
Dec 2 03:31:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:31:31 localhost podman[83289]: 2025-12-02 08:31:31.066645638 +0000 UTC m=+0.075693675 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=logrotate_crond, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:31:31 localhost podman[83289]: 2025-12-02 08:31:31.075668597 +0000 UTC m=+0.084716624 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1) Dec 2 03:31:31 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:31:31 localhost podman[83291]: 2025-12-02 08:31:31.116856652 +0000 UTC m=+0.120124600 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Dec 2 03:31:31 localhost podman[83290]: 2025-12-02 08:31:31.16233692 +0000 UTC m=+0.169807998 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, 
description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:31:31 localhost podman[83291]: 2025-12-02 08:31:31.169969476 +0000 UTC m=+0.173237404 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi) Dec 2 03:31:31 localhost systemd[1]: 
a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:31:31 localhost podman[83290]: 2025-12-02 08:31:31.184221477 +0000 UTC m=+0.191692525 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:31:31 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:31:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:31:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:31:33 localhost systemd[1]: tmp-crun.oM4GE0.mount: Deactivated successfully. 
Dec 2 03:31:33 localhost podman[83360]: 2025-12-02 08:31:33.081558433 +0000 UTC m=+0.082785853 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target) Dec 2 03:31:33 localhost podman[83359]: 2025-12-02 08:31:33.137890577 +0000 UTC m=+0.143135932 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=nova_compute, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:31:33 localhost podman[83359]: 2025-12-02 08:31:33.192376424 
+0000 UTC m=+0.197621729 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5) Dec 2 03:31:33 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:31:33 localhost podman[83360]: 2025-12-02 08:31:33.446074147 +0000 UTC m=+0.447301617 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git) Dec 2 03:31:33 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:31:34 localhost systemd[1]: tmp-crun.wuTW6d.mount: Deactivated successfully. Dec 2 03:31:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:31:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:31:36 localhost podman[83408]: 2025-12-02 08:31:36.090598032 +0000 UTC m=+0.087743797 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4) Dec 2 03:31:36 localhost podman[83408]: 2025-12-02 08:31:36.136298907 +0000 UTC m=+0.133444722 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, tcib_managed=true, architecture=x86_64) Dec 2 03:31:36 localhost systemd[1]: tmp-crun.A7Polt.mount: Deactivated successfully. Dec 2 03:31:36 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:31:36 localhost podman[83409]: 2025-12-02 08:31:36.146355428 +0000 UTC m=+0.141916954 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat 
OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, container_name=ovn_controller, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:31:36 localhost podman[83409]: 2025-12-02 08:31:36.229955096 +0000 UTC m=+0.225516652 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, managed_by=tripleo_ansible, 
vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:31:36 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:31:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:31:39 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:31:40 localhost recover_tripleo_nova_virtqemud[83463]: 61907 Dec 2 03:31:40 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:31:40 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 2 03:31:40 localhost podman[83456]: 2025-12-02 08:31:40.087038078 +0000 UTC m=+0.095142157 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, container_name=collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, release=1761123044, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc.) Dec 2 03:31:40 localhost podman[83456]: 2025-12-02 08:31:40.100737592 +0000 UTC m=+0.108841711 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, container_name=collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, url=https://www.redhat.com) Dec 2 03:31:40 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:31:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:31:43 localhost systemd[1]: tmp-crun.zFKLPt.mount: Deactivated successfully. 
Dec 2 03:31:43 localhost podman[83479]: 2025-12-02 08:31:43.065935034 +0000 UTC m=+0.077188670 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, 
distribution-scope=public, release=1761123044, architecture=x86_64, tcib_managed=true, container_name=iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, config_id=tripleo_step3) Dec 2 03:31:43 localhost podman[83479]: 2025-12-02 08:31:43.10164994 +0000 UTC m=+0.112903606 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., container_name=iscsid) Dec 2 03:31:43 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:31:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:31:56 localhost podman[83543]: 2025-12-02 08:31:56.074285618 +0000 UTC m=+0.081256766 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step1, architecture=x86_64, vcs-type=git, version=17.1.12, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4) Dec 2 03:31:56 localhost podman[83543]: 2025-12-02 08:31:56.279873572 +0000 UTC m=+0.286844690 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:31:56 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:32:00 localhost sshd[83650]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:32:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:32:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:32:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:32:01 localhost podman[83652]: 2025-12-02 08:32:01.240647022 +0000 UTC m=+0.102127322 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack 
Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 03:32:01 localhost podman[83652]: 2025-12-02 08:32:01.277827973 +0000 UTC m=+0.139308263 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=) Dec 2 03:32:01 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:32:01 localhost podman[83669]: 2025-12-02 08:32:01.338822461 +0000 UTC m=+0.095181748 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=ceilometer_agent_compute, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute) Dec 2 03:32:01 localhost podman[83669]: 2025-12-02 08:32:01.369934654 +0000 UTC m=+0.126293931 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12) Dec 2 03:32:01 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:32:01 localhost podman[83671]: 2025-12-02 08:32:01.390598404 +0000 UTC m=+0.142861963 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Dec 2 03:32:01 localhost podman[83671]: 2025-12-02 08:32:01.416932379 +0000 UTC m=+0.169195868 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:32:01 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:32:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:32:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:32:04 localhost podman[83725]: 2025-12-02 08:32:04.055555492 +0000 UTC m=+0.060968259 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:36:58Z, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, container_name=nova_compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5) Dec 2 03:32:04 localhost podman[83725]: 2025-12-02 08:32:04.080878375 +0000 UTC m=+0.086291132 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vcs-type=git, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:32:04 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:32:04 localhost systemd[1]: tmp-crun.evZp44.mount: Deactivated successfully. Dec 2 03:32:04 localhost podman[83726]: 2025-12-02 08:32:04.128525631 +0000 UTC m=+0.126643662 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4) Dec 2 03:32:04 localhost podman[83726]: 2025-12-02 08:32:04.524133197 +0000 UTC m=+0.522251218 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=nova_migration_target, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:32:04 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:32:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:32:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:32:07 localhost systemd[1]: tmp-crun.fahlUm.mount: Deactivated successfully. Dec 2 03:32:07 localhost podman[83775]: 2025-12-02 08:32:07.088497271 +0000 UTC m=+0.083344492 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, url=https://www.redhat.com, 
distribution-scope=public, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:32:07 localhost podman[83774]: 2025-12-02 08:32:07.124606097 +0000 UTC m=+0.124235406 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent) Dec 2 03:32:07 localhost podman[83775]: 2025-12-02 08:32:07.14406354 +0000 UTC m=+0.138910771 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, distribution-scope=public, release=1761123044, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com) Dec 2 03:32:07 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:32:07 localhost podman[83774]: 2025-12-02 08:32:07.19930899 +0000 UTC m=+0.198938269 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z) Dec 2 03:32:07 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:32:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:32:11 localhost systemd[1]: tmp-crun.EORE3i.mount: Deactivated successfully. 
Dec 2 03:32:11 localhost podman[83820]: 2025-12-02 08:32:11.08085041 +0000 UTC m=+0.084330982 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public) Dec 2 03:32:11 localhost podman[83820]: 2025-12-02 08:32:11.117926677 +0000 UTC m=+0.121407229 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, vcs-type=git, container_name=collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, architecture=x86_64, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': 
'512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:32:11 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:32:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:32:14 localhost podman[83841]: 2025-12-02 08:32:14.074722049 +0000 UTC m=+0.082189085 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, vcs-type=git, container_name=iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat 
OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, architecture=x86_64, release=1761123044, distribution-scope=public) Dec 2 03:32:14 localhost podman[83841]: 2025-12-02 08:32:14.082436508 +0000 UTC m=+0.089903514 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:32:14 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:32:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:32:27 localhost systemd[1]: tmp-crun.7lnhAG.mount: Deactivated successfully. 
Dec 2 03:32:27 localhost podman[83861]: 2025-12-02 08:32:27.567348904 +0000 UTC m=+0.573210366 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, release=1761123044, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, 
build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true) Dec 2 03:32:27 localhost podman[83861]: 2025-12-02 08:32:27.761266907 +0000 UTC m=+0.767128389 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-qdrouterd) Dec 2 03:32:27 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:32:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:32:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:32:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:32:32 localhost podman[83891]: 2025-12-02 08:32:32.095623424 +0000 UTC m=+0.093708171 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, 
vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true) Dec 2 03:32:32 localhost podman[83891]: 2025-12-02 08:32:32.129250345 +0000 UTC m=+0.127335142 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:32:32 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:32:32 localhost podman[83892]: 2025-12-02 08:32:32.1526602 +0000 UTC m=+0.149748567 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com) Dec 2 03:32:32 localhost systemd[1]: tmp-crun.9FrVZB.mount: Deactivated successfully. Dec 2 03:32:32 localhost podman[83890]: 2025-12-02 08:32:32.187656893 +0000 UTC m=+0.187967630 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step4, vcs-type=git, distribution-scope=public) Dec 2 03:32:32 localhost podman[83892]: 2025-12-02 08:32:32.217636142 +0000 UTC m=+0.214724519 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible) Dec 2 03:32:32 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:32:32 localhost podman[83890]: 2025-12-02 08:32:32.274276825 +0000 UTC m=+0.274587572 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, config_id=tripleo_step4, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:32:32 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:32:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:32:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:32:35 localhost podman[83962]: 2025-12-02 08:32:35.060173785 +0000 UTC m=+0.070935276 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., 
build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute) Dec 2 03:32:35 localhost podman[83963]: 2025-12-02 08:32:35.084561531 +0000 UTC m=+0.086837700 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, container_name=nova_migration_target, config_id=tripleo_step4) Dec 2 03:32:35 localhost podman[83962]: 2025-12-02 08:32:35.143904527 +0000 UTC m=+0.154666038 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step5, vendor=Red Hat, Inc.) Dec 2 03:32:35 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:32:35 localhost podman[83963]: 2025-12-02 08:32:35.430709417 +0000 UTC m=+0.432985536 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=nova_migration_target, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step4) Dec 2 03:32:35 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:32:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:32:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:32:38 localhost systemd[1]: tmp-crun.ZxvTgL.mount: Deactivated successfully. 
Dec 2 03:32:38 localhost podman[84008]: 2025-12-02 08:32:38.070492895 +0000 UTC m=+0.074137525 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4) Dec 2 03:32:38 localhost systemd[1]: tmp-crun.KcGvne.mount: Deactivated successfully. 
Dec 2 03:32:38 localhost podman[84009]: 2025-12-02 08:32:38.103469686 +0000 UTC m=+0.105302361 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, release=1761123044, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, vcs-type=git, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=) Dec 2 03:32:38 localhost podman[84009]: 2025-12-02 08:32:38.128837191 +0000 UTC m=+0.130669906 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:32:38 localhost podman[84008]: 2025-12-02 08:32:38.139748629 +0000 UTC m=+0.143393239 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Dec 2 03:32:38 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:32:38 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:32:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:32:42 localhost podman[84055]: 2025-12-02 08:32:42.055911281 +0000 UTC m=+0.066532441 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, container_name=collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:32:42 localhost podman[84055]: 2025-12-02 08:32:42.067779828 +0000 UTC m=+0.078401028 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20251118.1, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:32:42 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:32:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:32:45 localhost podman[84075]: 2025-12-02 08:32:45.071368248 +0000 UTC m=+0.079861252 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Dec 2 03:32:45 localhost podman[84075]: 2025-12-02 08:32:45.083902487 +0000 UTC m=+0.092395521 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, release=1761123044, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., 
distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12) Dec 2 03:32:45 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:32:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:32:58 localhost podman[84140]: 2025-12-02 08:32:58.08091524 +0000 UTC m=+0.082889686 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:32:58 localhost podman[84140]: 2025-12-02 08:32:58.236656002 +0000 UTC m=+0.238630378 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, distribution-scope=public) Dec 2 03:32:58 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:33:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:33:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:33:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:33:03 localhost podman[84245]: 2025-12-02 08:33:03.085042381 +0000 UTC m=+0.087912862 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO 
Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z) Dec 2 03:33:03 localhost podman[84245]: 2025-12-02 08:33:03.115484774 +0000 UTC m=+0.118355275 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, tcib_managed=true, architecture=x86_64, container_name=ceilometer_agent_compute, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, version=17.1.12) Dec 2 03:33:03 localhost systemd[1]: tmp-crun.Rhe46F.mount: Deactivated successfully. 
Dec 2 03:33:03 localhost podman[84244]: 2025-12-02 08:33:03.135553565 +0000 UTC m=+0.137716674 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-cron, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:33:03 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:33:03 localhost podman[84244]: 2025-12-02 08:33:03.174786519 +0000 UTC m=+0.176949678 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-11-18T22:49:32Z, distribution-scope=public, container_name=logrotate_crond, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=) Dec 2 03:33:03 localhost systemd[1]: tmp-crun.xp0LGM.mount: Deactivated successfully. Dec 2 03:33:03 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:33:03 localhost podman[84246]: 2025-12-02 08:33:03.193680985 +0000 UTC m=+0.188850657 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 2 03:33:03 localhost podman[84246]: 2025-12-02 08:33:03.225841911 +0000 UTC m=+0.221011593 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4) Dec 2 03:33:03 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:33:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:33:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:33:06 localhost podman[84315]: 2025-12-02 08:33:06.089441278 +0000 UTC m=+0.089246774 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, container_name=nova_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, build-date=2025-11-19T00:36:58Z) Dec 2 03:33:06 localhost podman[84315]: 2025-12-02 08:33:06.125033439 +0000 UTC m=+0.124838975 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, 
release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Dec 2 03:33:06 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:33:06 localhost podman[84316]: 2025-12-02 08:33:06.142623284 +0000 UTC m=+0.139998235 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team) Dec 2 03:33:06 localhost podman[84316]: 2025-12-02 08:33:06.5291762 +0000 UTC m=+0.526551111 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
build-date=2025-11-19T00:36:58Z, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack 
Platform 17.1 nova-compute) Dec 2 03:33:06 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:33:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:33:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:33:09 localhost podman[84364]: 2025-12-02 08:33:09.072423172 +0000 UTC m=+0.077084078 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:33:09 localhost podman[84364]: 2025-12-02 08:33:09.10793552 +0000 UTC m=+0.112596456 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, 
config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:33:09 localhost systemd[1]: tmp-crun.NYxfVO.mount: Deactivated successfully. Dec 2 03:33:09 localhost podman[84365]: 2025-12-02 08:33:09.13732675 +0000 UTC m=+0.139926072 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 
ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=) Dec 2 03:33:09 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. 
Dec 2 03:33:09 localhost podman[84365]: 2025-12-02 08:33:09.212724175 +0000 UTC m=+0.215323497 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 03:33:09 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:33:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:33:13 localhost podman[84411]: 2025-12-02 08:33:13.077823915 +0000 UTC m=+0.084781785 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:33:13 localhost podman[84411]: 2025-12-02 08:33:13.086432892 +0000 UTC m=+0.093390802 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., container_name=collectd, batch=17.1_20251118.1, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, distribution-scope=public, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:33:13 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:33:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:33:16 localhost systemd[1]: tmp-crun.7u8SSe.mount: Deactivated successfully. Dec 2 03:33:16 localhost podman[84431]: 2025-12-02 08:33:16.082002945 +0000 UTC m=+0.089203423 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, container_name=iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git) Dec 2 03:33:16 localhost podman[84431]: 2025-12-02 08:33:16.121005942 +0000 UTC m=+0.128206400 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:33:16 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:33:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:33:28 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:33:29 localhost recover_tripleo_nova_virtqemud[84452]: 61907 Dec 2 03:33:29 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:33:29 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:33:29 localhost podman[84450]: 2025-12-02 08:33:29.088694588 +0000 UTC m=+0.085877820 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:33:29 localhost podman[84450]: 2025-12-02 08:33:29.307176452 +0000 UTC m=+0.304359684 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:33:29 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:33:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. 
Dec 2 03:33:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:33:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:33:34 localhost podman[84483]: 2025-12-02 08:33:34.093252603 +0000 UTC m=+0.089229094 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, 
build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:33:34 localhost podman[84482]: 2025-12-02 08:33:34.07377867 +0000 UTC m=+0.079342527 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z) Dec 2 03:33:34 localhost podman[84487]: 2025-12-02 08:33:34.134556932 +0000 UTC m=+0.129122938 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi) Dec 2 03:33:34 localhost podman[84483]: 2025-12-02 08:33:34.146893963 +0000 UTC m=+0.142870384 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:33:34 localhost podman[84487]: 2025-12-02 08:33:34.160844045 +0000 UTC m=+0.155410091 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com) Dec 2 03:33:34 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:33:34 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:33:34 localhost podman[84482]: 2025-12-02 08:33:34.211490893 +0000 UTC m=+0.217054770 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:33:34 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:33:35 localhost systemd[1]: tmp-crun.S1kcyT.mount: Deactivated successfully. Dec 2 03:33:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:33:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:33:37 localhost podman[84556]: 2025-12-02 08:33:37.075509044 +0000 UTC m=+0.076526211 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public) Dec 2 03:33:37 localhost podman[84555]: 2025-12-02 08:33:37.13613009 +0000 UTC m=+0.139017254 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, tcib_managed=true, container_name=nova_compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1) Dec 2 03:33:37 localhost podman[84555]: 2025-12-02 
08:33:37.159639328 +0000 UTC m=+0.162526522 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:33:37 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:33:37 localhost podman[84556]: 2025-12-02 08:33:37.543039597 +0000 UTC m=+0.544056804 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:33:37 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:33:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:33:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:33:40 localhost podman[84601]: 2025-12-02 08:33:40.067283189 +0000 UTC m=+0.064175658 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, distribution-scope=public, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true) Dec 2 03:33:40 localhost podman[84601]: 2025-12-02 08:33:40.095854974 +0000 UTC m=+0.092747433 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:33:40 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:33:40 localhost systemd[1]: tmp-crun.zN62g3.mount: Deactivated successfully. Dec 2 03:33:40 localhost podman[84600]: 2025-12-02 08:33:40.190012179 +0000 UTC m=+0.187124634 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team) Dec 2 03:33:40 localhost podman[84600]: 2025-12-02 08:33:40.257264351 +0000 UTC m=+0.254376816 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12) Dec 2 03:33:40 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:33:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:33:44 localhost podman[84650]: 2025-12-02 08:33:44.069319258 +0000 UTC m=+0.074252349 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step3, release=1761123044, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12) Dec 2 03:33:44 localhost podman[84650]: 2025-12-02 08:33:44.10683951 +0000 UTC m=+0.111772651 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1) Dec 2 03:33:44 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. 
Dec 2 03:33:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:33:47 localhost podman[84670]: 2025-12-02 08:33:47.076671265 +0000 UTC m=+0.081259606 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, container_name=iscsid) Dec 2 03:33:47 localhost podman[84670]: 2025-12-02 08:33:47.086709465 +0000 UTC m=+0.091297746 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, container_name=iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:33:47 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:33:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:34:00 localhost podman[84734]: 2025-12-02 08:34:00.075639097 +0000 UTC m=+0.079261185 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Dec 2 03:34:00 localhost podman[84734]: 2025-12-02 08:34:00.290829069 +0000 UTC m=+0.294451157 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:34:00 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:34:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:34:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:34:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:34:05 localhost systemd[1]: tmp-crun.vEit7F.mount: Deactivated successfully. 
Dec 2 03:34:05 localhost podman[84842]: 2025-12-02 08:34:05.063742343 +0000 UTC m=+0.070964319 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:34:05 localhost podman[84841]: 2025-12-02 08:34:05.109470338 +0000 UTC m=+0.116560920 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, name=rhosp17/openstack-cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond) Dec 2 03:34:05 localhost podman[84841]: 2025-12-02 08:34:05.116223477 +0000 UTC m=+0.123314049 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12) Dec 2 03:34:05 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:34:05 localhost podman[84842]: 2025-12-02 08:34:05.160501247 +0000 UTC m=+0.167723223 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:34:05 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:34:05 localhost podman[84843]: 2025-12-02 08:34:05.17641102 +0000 UTC m=+0.181581202 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, release=1761123044, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vcs-type=git) Dec 2 03:34:05 localhost podman[84843]: 2025-12-02 08:34:05.222971141 +0000 UTC m=+0.228141283 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4) Dec 2 03:34:05 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:34:06 localhost systemd[1]: tmp-crun.1JXpZJ.mount: Deactivated successfully. Dec 2 03:34:07 localhost sshd[84913]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:34:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:34:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:34:08 localhost podman[84916]: 2025-12-02 08:34:08.040322417 +0000 UTC m=+0.083697092 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vcs-type=git, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:34:08 localhost systemd[1]: tmp-crun.yabzFk.mount: Deactivated successfully. 
Dec 2 03:34:08 localhost podman[84915]: 2025-12-02 08:34:08.067710675 +0000 UTC m=+0.119508481 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute) Dec 2 03:34:08 localhost podman[84915]: 2025-12-02 08:34:08.092891165 +0000 UTC m=+0.144688961 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:34:08 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:34:08 localhost podman[84916]: 2025-12-02 08:34:08.380895241 +0000 UTC m=+0.424269876 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc.) Dec 2 03:34:08 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:34:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:34:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:34:11 localhost podman[84962]: 2025-12-02 08:34:11.073088901 +0000 UTC m=+0.078082028 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Dec 2 03:34:11 localhost podman[84962]: 2025-12-02 08:34:11.113471011 +0000 UTC m=+0.118464118 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Dec 2 03:34:11 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:34:11 localhost podman[84963]: 2025-12-02 08:34:11.131486619 +0000 UTC m=+0.134868607 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:34:11 localhost podman[84963]: 2025-12-02 08:34:11.150793976 +0000 UTC m=+0.154175964 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:34:11 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:34:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:34:15 localhost systemd[1]: tmp-crun.6ylebD.mount: Deactivated successfully. 
Dec 2 03:34:15 localhost podman[85009]: 2025-12-02 08:34:15.078425462 +0000 UTC m=+0.084741865 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, name=rhosp17/openstack-collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:34:15 localhost podman[85009]: 2025-12-02 08:34:15.115037995 +0000 UTC m=+0.121354368 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, release=1761123044, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Dec 2 03:34:15 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:34:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:34:18 localhost podman[85031]: 2025-12-02 08:34:18.053900673 +0000 UTC m=+0.064498098 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 03:34:18 localhost podman[85031]: 2025-12-02 08:34:18.090782454 +0000 UTC m=+0.101379799 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
io.buildah.version=1.41.4, config_id=tripleo_step3, distribution-scope=public, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, release=1761123044, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:34:18 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:34:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:34:31 localhost systemd[1]: tmp-crun.w4yu1Z.mount: Deactivated successfully. 
Dec 2 03:34:31 localhost podman[85050]: 2025-12-02 08:34:31.070561022 +0000 UTC m=+0.080027108 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step1, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 2 03:34:31 localhost podman[85050]: 2025-12-02 08:34:31.265693103 +0000 UTC m=+0.275159179 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr) Dec 2 03:34:31 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:34:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:34:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:34:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:34:36 localhost podman[85081]: 2025-12-02 08:34:36.080793622 +0000 UTC m=+0.085541510 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:34:36 localhost podman[85080]: 2025-12-02 08:34:36.130143029 +0000 UTC m=+0.135982020 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, 
config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Dec 2 03:34:36 localhost podman[85080]: 2025-12-02 08:34:36.138685473 +0000 UTC m=+0.144524435 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vcs-type=git, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:34:36 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:34:36 localhost podman[85082]: 2025-12-02 08:34:36.196478523 +0000 UTC m=+0.196749212 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, io.openshift.expose-services=) Dec 2 03:34:36 localhost podman[85081]: 2025-12-02 08:34:36.210846778 +0000 UTC m=+0.215594646 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vcs-type=git, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:34:36 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:34:36 localhost podman[85082]: 2025-12-02 08:34:36.225892193 +0000 UTC m=+0.226162882 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, 
container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:34:36 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:34:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:34:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:34:39 localhost podman[85154]: 2025-12-02 08:34:39.068095769 +0000 UTC m=+0.076179560 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute) Dec 2 03:34:39 localhost systemd[1]: tmp-crun.2q3ohf.mount: Deactivated successfully. 
Dec 2 03:34:39 localhost podman[85155]: 2025-12-02 08:34:39.111610636 +0000 UTC m=+0.116736185 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:34:39 localhost podman[85154]: 2025-12-02 08:34:39.14341298 +0000 UTC m=+0.151496781 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_id=tripleo_step5, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, version=17.1.12, distribution-scope=public, architecture=x86_64) Dec 2 03:34:39 localhost systemd[1]: 
6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:34:39 localhost podman[85155]: 2025-12-02 08:34:39.524662762 +0000 UTC m=+0.529788341 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, version=17.1.12, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, 
maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container) Dec 2 03:34:39 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:34:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:34:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:34:42 localhost systemd[1]: tmp-crun.bkcBnq.mount: Deactivated successfully. 
Dec 2 03:34:42 localhost podman[85204]: 2025-12-02 08:34:42.072575927 +0000 UTC m=+0.077995235 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.12, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, tcib_managed=true) Dec 2 03:34:42 localhost podman[85205]: 2025-12-02 08:34:42.107484218 +0000 UTC m=+0.111150722 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:34:42 localhost podman[85204]: 2025-12-02 08:34:42.116232759 +0000 UTC m=+0.121652037 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, tcib_managed=true, build-date=2025-11-19T00:14:25Z, distribution-scope=public, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4) Dec 2 03:34:42 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:34:42 localhost podman[85205]: 2025-12-02 08:34:42.157310221 +0000 UTC m=+0.160976775 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T23:34:05Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:34:42 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:34:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:34:46 localhost podman[85252]: 2025-12-02 08:34:46.078989892 +0000 UTC m=+0.087122878 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, config_id=tripleo_step3, vcs-type=git, version=17.1.12, build-date=2025-11-18T22:51:28Z, tcib_managed=true, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=) Dec 2 03:34:46 localhost podman[85252]: 2025-12-02 08:34:46.095019578 +0000 UTC m=+0.103152594 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
container_name=collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, 
konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3) Dec 2 03:34:46 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:34:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:34:49 localhost podman[85272]: 2025-12-02 08:34:49.057434375 +0000 UTC m=+0.068113969 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) 
Dec 2 03:34:49 localhost podman[85272]: 2025-12-02 08:34:49.066819355 +0000 UTC m=+0.077498989 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-iscsid, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible) Dec 2 03:34:49 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:35:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:35:01 localhost podman[85351]: 2025-12-02 08:35:01.893742322 +0000 UTC m=+0.100169712 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, build-date=2025-11-18T22:49:46Z, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container) Dec 2 03:35:02 localhost podman[85351]: 2025-12-02 08:35:02.112789003 +0000 UTC m=+0.319216393 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step1) Dec 2 03:35:02 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:35:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:35:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:35:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:35:07 localhost systemd[1]: tmp-crun.BJWyI4.mount: Deactivated successfully. Dec 2 03:35:07 localhost podman[85442]: 2025-12-02 08:35:07.143228318 +0000 UTC m=+0.138429686 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64) Dec 2 03:35:07 localhost systemd[1]: tmp-crun.WrWuJB.mount: Deactivated successfully. 
Dec 2 03:35:07 localhost podman[85443]: 2025-12-02 08:35:07.176602092 +0000 UTC m=+0.173604246 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1) Dec 2 03:35:07 localhost podman[85444]: 2025-12-02 08:35:07.213647068 +0000 UTC m=+0.208550246 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:35:07 localhost podman[85443]: 2025-12-02 08:35:07.227806877 +0000 UTC m=+0.224809021 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., 
name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:35:07 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:35:07 localhost podman[85444]: 2025-12-02 08:35:07.265888696 +0000 UTC m=+0.260791894 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:35:07 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:35:07 localhost podman[85442]: 2025-12-02 08:35:07.277575647 +0000 UTC m=+0.272777075 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12) Dec 2 03:35:07 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:35:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:35:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:35:10 localhost podman[85512]: 2025-12-02 08:35:10.073574953 +0000 UTC m=+0.079133481 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, distribution-scope=public) Dec 2 03:35:10 localhost podman[85512]: 2025-12-02 08:35:10.139611617 +0000 UTC m=+0.145170165 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 
nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Dec 2 03:35:10 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:35:10 localhost podman[85513]: 2025-12-02 08:35:10.140731272 +0000 UTC m=+0.141912965 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, architecture=x86_64, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:35:10 localhost podman[85513]: 2025-12-02 08:35:10.51311821 +0000 UTC m=+0.514299953 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, container_name=nova_migration_target, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1) Dec 2 03:35:10 localhost systemd[1]: 
f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:35:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:35:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:35:12 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:35:13 localhost recover_tripleo_nova_virtqemud[85568]: 61907 Dec 2 03:35:13 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:35:13 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:35:13 localhost podman[85560]: 2025-12-02 08:35:13.084672017 +0000 UTC m=+0.084242039 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, 
distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, tcib_managed=true, container_name=ovn_metadata_agent) Dec 2 03:35:13 localhost podman[85560]: 2025-12-02 08:35:13.11901562 +0000 UTC m=+0.118585602 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z) Dec 2 03:35:13 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:35:13 localhost systemd[1]: tmp-crun.VqQKu8.mount: Deactivated successfully. 
Dec 2 03:35:13 localhost podman[85561]: 2025-12-02 08:35:13.194667752 +0000 UTC m=+0.186843306 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true) Dec 2 03:35:13 localhost podman[85561]: 2025-12-02 08:35:13.242894344 +0000 UTC m=+0.235069848 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 
'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, com.redhat.component=openstack-ovn-controller-container) Dec 2 03:35:13 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:35:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:35:17 localhost systemd[1]: tmp-crun.kD99f4.mount: Deactivated successfully. Dec 2 03:35:17 localhost podman[85608]: 2025-12-02 08:35:17.078706817 +0000 UTC m=+0.087555631 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step3, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z) Dec 2 03:35:17 localhost podman[85608]: 2025-12-02 08:35:17.089350736 +0000 UTC m=+0.098199520 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 2 03:35:17 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:35:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:35:20 localhost podman[85628]: 2025-12-02 08:35:20.070905785 +0000 UTC m=+0.074590310 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-iscsid) Dec 2 03:35:20 localhost podman[85628]: 2025-12-02 08:35:20.08076164 +0000 UTC m=+0.084446145 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, container_name=iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:35:20 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 03:35:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:35:33 localhost systemd[1]: tmp-crun.gt0TK7.mount: Deactivated successfully. Dec 2 03:35:33 localhost podman[85647]: 2025-12-02 08:35:33.085008929 +0000 UTC m=+0.088909984 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container) Dec 2 03:35:33 localhost podman[85647]: 2025-12-02 08:35:33.306191676 +0000 UTC m=+0.310092741 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12) Dec 2 03:35:33 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:35:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:35:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. 
Dec 2 03:35:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:35:38 localhost systemd[1]: tmp-crun.ziwUwL.mount: Deactivated successfully. Dec 2 03:35:38 localhost podman[85678]: 2025-12-02 08:35:38.070425291 +0000 UTC m=+0.074792797 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4) Dec 2 03:35:38 localhost podman[85678]: 2025-12-02 08:35:38.093964459 +0000 UTC m=+0.098331895 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, 
batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:35:38 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:35:38 localhost podman[85677]: 2025-12-02 08:35:38.174324727 +0000 UTC m=+0.178813496 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.buildah.version=1.41.4, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1) Dec 2 03:35:38 localhost podman[85677]: 2025-12-02 08:35:38.206791352 +0000 UTC m=+0.211280061 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, container_name=logrotate_crond, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z) Dec 2 03:35:38 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:35:38 localhost podman[85679]: 2025-12-02 08:35:38.217428141 +0000 UTC m=+0.218691611 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, 
build-date=2025-11-19T00:12:45Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 2 03:35:38 localhost podman[85679]: 2025-12-02 08:35:38.264526429 +0000 UTC m=+0.265789879 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vcs-type=git, container_name=ceilometer_agent_ipmi, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:35:38 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:35:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:35:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:35:41 localhost podman[85749]: 2025-12-02 08:35:41.058507741 +0000 UTC m=+0.064868539 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step5, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:35:41 localhost podman[85749]: 2025-12-02 08:35:41.078821541 +0000 UTC m=+0.085182359 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:35:41 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:35:41 localhost podman[85750]: 2025-12-02 08:35:41.132382659 +0000 UTC m=+0.135796626 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Dec 2 03:35:41 localhost podman[85750]: 2025-12-02 08:35:41.461968981 +0000 UTC m=+0.465382988 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 
17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64) Dec 2 03:35:41 localhost systemd[1]: 
f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:35:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:35:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:35:44 localhost podman[85801]: 2025-12-02 08:35:44.084857167 +0000 UTC m=+0.092277117 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git) Dec 2 03:35:44 localhost systemd[1]: tmp-crun.kHCejB.mount: Deactivated successfully. Dec 2 03:35:44 localhost podman[85801]: 2025-12-02 08:35:44.14280082 +0000 UTC m=+0.150220800 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.12, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:35:44 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. 
Dec 2 03:35:44 localhost podman[85800]: 2025-12-02 08:35:44.228739101 +0000 UTC m=+0.233984064 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:35:44 localhost podman[85800]: 2025-12-02 08:35:44.260735981 +0000 UTC m=+0.265980934 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ovn_metadata_agent, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1) Dec 2 03:35:44 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:35:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:35:48 localhost podman[85849]: 2025-12-02 08:35:48.075630087 +0000 UTC m=+0.083436764 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Dec 2 03:35:48 localhost podman[85849]: 2025-12-02 08:35:48.084234963 +0000 UTC m=+0.092041630 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, io.openshift.expose-services=) Dec 2 03:35:48 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:35:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:35:51 localhost systemd[1]: tmp-crun.DDUE3I.mount: Deactivated successfully. Dec 2 03:35:51 localhost podman[85869]: 2025-12-02 08:35:51.058606601 +0000 UTC m=+0.068894984 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3) Dec 2 03:35:51 localhost podman[85869]: 2025-12-02 08:35:51.094965637 +0000 UTC m=+0.105253970 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step3, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, 
description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:35:51 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:36:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:36:03 localhost podman[85949]: 2025-12-02 08:36:03.465177976 +0000 UTC m=+0.083313190 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, tcib_managed=true, container_name=metrics_qdr, release=1761123044, vcs-type=git, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Dec 2 03:36:03 localhost podman[85949]: 2025-12-02 08:36:03.690494111 +0000 UTC m=+0.308629225 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step1, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:36:03 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:36:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:36:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:36:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:36:09 localhost podman[86042]: 2025-12-02 08:36:09.076939916 +0000 UTC m=+0.077318995 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:36:09 localhost systemd[1]: tmp-crun.G5Io8U.mount: Deactivated successfully. Dec 2 03:36:09 localhost podman[86041]: 2025-12-02 08:36:09.165452976 +0000 UTC m=+0.165296848 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, architecture=x86_64) Dec 2 03:36:09 localhost podman[86042]: 2025-12-02 08:36:09.188955873 +0000 UTC m=+0.189334962 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, 
io.buildah.version=1.41.4) Dec 2 03:36:09 localhost podman[86041]: 2025-12-02 08:36:09.194288388 +0000 UTC m=+0.194132270 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute) Dec 2 03:36:09 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:36:09 localhost podman[86040]: 2025-12-02 08:36:09.14297098 +0000 UTC m=+0.143243966 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:36:09 localhost podman[86040]: 2025-12-02 08:36:09.279999971 +0000 UTC m=+0.280272977 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, 
container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, 
release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:36:09 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:36:09 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:36:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:36:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:36:12 localhost podman[86112]: 2025-12-02 08:36:12.099751571 +0000 UTC m=+0.100392539 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, name=rhosp17/openstack-nova-compute, tcib_managed=true, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.buildah.version=1.41.4) Dec 2 03:36:12 localhost podman[86112]: 2025-12-02 08:36:12.125352284 +0000 UTC m=+0.125993212 
container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Dec 2 03:36:12 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:36:12 localhost podman[86113]: 2025-12-02 08:36:12.195605838 +0000 UTC m=+0.188725663 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:36:12 localhost podman[86113]: 2025-12-02 08:36:12.599579414 +0000 UTC m=+0.592699189 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, container_name=nova_migration_target, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Dec 2 03:36:12 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:36:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:36:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:36:15 localhost podman[86162]: 2025-12-02 08:36:15.076956525 +0000 UTC m=+0.083097484 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, version=17.1.12) Dec 2 03:36:15 localhost podman[86161]: 2025-12-02 08:36:15.126375394 +0000 UTC m=+0.133507363 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible) Dec 2 03:36:15 localhost podman[86161]: 2025-12-02 08:36:15.162768011 +0000 UTC m=+0.169899980 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git) Dec 2 03:36:15 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:36:15 localhost podman[86162]: 2025-12-02 08:36:15.179907382 +0000 UTC m=+0.186048351 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, 
io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12) Dec 2 03:36:15 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:36:17 localhost sshd[86209]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:36:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:36:19 localhost podman[86211]: 2025-12-02 08:36:19.082532105 +0000 UTC m=+0.086097056 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1) Dec 2 03:36:19 localhost podman[86211]: 2025-12-02 08:36:19.096975552 +0000 UTC m=+0.100540503 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 
17.1 collectd, config_id=tripleo_step3, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, release=1761123044, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc.) Dec 2 03:36:19 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:36:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:36:22 localhost podman[86230]: 2025-12-02 08:36:22.083301479 +0000 UTC m=+0.090006407 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-type=git) Dec 2 03:36:22 localhost podman[86230]: 2025-12-02 08:36:22.126012041 +0000 UTC m=+0.132716979 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, 
io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, container_name=iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Dec 2 03:36:22 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:36:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:36:34 localhost podman[86248]: 2025-12-02 08:36:34.077377951 +0000 UTC m=+0.082239516 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.expose-services=) Dec 2 03:36:34 localhost podman[86248]: 2025-12-02 08:36:34.262310465 +0000 UTC m=+0.267171970 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_id=tripleo_step1, release=1761123044, tcib_managed=true, container_name=metrics_qdr, io.openshift.expose-services=) Dec 2 03:36:34 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:36:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Dec 2 03:36:35 localhost recover_tripleo_nova_virtqemud[86278]: 61907 Dec 2 03:36:35 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:36:35 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:36:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:36:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:36:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:36:40 localhost podman[86281]: 2025-12-02 08:36:40.083731646 +0000 UTC m=+0.085098117 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, version=17.1.12, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:36:40 localhost podman[86279]: 2025-12-02 08:36:40.129233465 +0000 UTC m=+0.133889400 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044) Dec 2 03:36:40 localhost podman[86281]: 2025-12-02 08:36:40.143961914 +0000 UTC m=+0.145328405 container exec_died 
a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi) Dec 2 03:36:40 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:36:40 localhost podman[86280]: 2025-12-02 08:36:40.187531736 +0000 UTC m=+0.190335299 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4) Dec 2 03:36:40 localhost podman[86279]: 2025-12-02 08:36:40.215253507 +0000 UTC m=+0.219909482 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:36:40 localhost systemd[1]: 
7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:36:40 localhost podman[86280]: 2025-12-02 08:36:40.245067268 +0000 UTC m=+0.247870841 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true) Dec 2 03:36:40 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:36:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:36:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:36:43 localhost podman[86351]: 2025-12-02 08:36:43.082252977 +0000 UTC m=+0.089674298 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64) Dec 2 03:36:43 localhost podman[86352]: 2025-12-02 08:36:43.13004837 +0000 UTC m=+0.135483265 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc.) 
Dec 2 03:36:43 localhost podman[86351]: 2025-12-02 08:36:43.159311854 +0000 UTC m=+0.166733185 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true) Dec 2 03:36:43 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:36:43 localhost podman[86352]: 2025-12-02 08:36:43.498880308 +0000 UTC m=+0.504315183 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4) Dec 2 03:36:43 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:36:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:36:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:36:46 localhost podman[86400]: 2025-12-02 08:36:46.079185811 +0000 UTC m=+0.086156187 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64) Dec 2 03:36:46 localhost systemd[1]: tmp-crun.d4IGxh.mount: Deactivated successfully. 
Dec 2 03:36:46 localhost podman[86400]: 2025-12-02 08:36:46.133945373 +0000 UTC m=+0.140915779 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, 
batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:36:46 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. 
Dec 2 03:36:46 localhost podman[86401]: 2025-12-02 08:36:46.139021318 +0000 UTC m=+0.142259318 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ovn_controller, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4) Dec 2 03:36:46 localhost podman[86401]: 2025-12-02 08:36:46.217328941 +0000 UTC m=+0.220566931 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, 
com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:36:46 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:36:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:36:50 localhost systemd[1]: tmp-crun.0GXMyb.mount: Deactivated successfully. Dec 2 03:36:50 localhost podman[86450]: 2025-12-02 08:36:50.102711013 +0000 UTC m=+0.086171459 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=collectd, distribution-scope=public, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, 
konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, batch=17.1_20251118.1) Dec 2 03:36:50 localhost podman[86450]: 2025-12-02 08:36:50.109866306 +0000 UTC m=+0.093326682 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3) Dec 2 03:36:50 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:36:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:36:53 localhost podman[86469]: 2025-12-02 08:36:53.063645351 +0000 UTC m=+0.066381555 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12) Dec 2 03:36:53 localhost podman[86469]: 2025-12-02 08:36:53.099334888 +0000 UTC m=+0.102071142 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, container_name=iscsid, io.buildah.version=1.41.4, 
build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, config_id=tripleo_step3, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 
03:36:53 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:37:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:37:05 localhost podman[86546]: 2025-12-02 08:37:05.100383215 +0000 UTC m=+0.103423560 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, 
url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=metrics_qdr) Dec 2 03:37:05 localhost podman[86546]: 2025-12-02 08:37:05.29723591 +0000 UTC m=+0.300276195 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z) Dec 2 03:37:05 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:37:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:37:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. 
Dec 2 03:37:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:37:11 localhost podman[86641]: 2025-12-02 08:37:11.062565842 +0000 UTC m=+0.062039480 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 2 03:37:11 localhost systemd[1]: tmp-crun.CokC8E.mount: Deactivated successfully. Dec 2 03:37:11 localhost podman[86640]: 2025-12-02 08:37:11.077609111 +0000 UTC m=+0.077719428 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, distribution-scope=public) Dec 2 03:37:11 localhost podman[86641]: 2025-12-02 08:37:11.086668479 +0000 UTC m=+0.086142107 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, release=1761123044, 
description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:37:11 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:37:11 localhost podman[86640]: 2025-12-02 08:37:11.099778423 +0000 UTC m=+0.099888750 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044) Dec 2 03:37:11 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:37:11 localhost podman[86639]: 2025-12-02 08:37:11.162174392 +0000 UTC m=+0.164943015 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, release=1761123044, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container) Dec 2 03:37:11 localhost podman[86639]: 2025-12-02 08:37:11.172821046 +0000 UTC m=+0.175589709 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, container_name=logrotate_crond) Dec 2 03:37:11 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:37:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:37:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 447 writes, 1749 keys, 447 commit groups, 1.0 writes per commit group, ingest: 2.00 MB, 0.00 MB/s#012Interval WAL: 447 writes, 173 syncs, 2.58 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:37:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:37:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:37:14 localhost podman[86712]: 2025-12-02 08:37:14.0761107 +0000 UTC m=+0.077773628 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Dec 2 03:37:14 localhost podman[86711]: 2025-12-02 08:37:14.130603974 +0000 UTC m=+0.135897987 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, version=17.1.12, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute)
Dec 2 03:37:14 localhost podman[86711]: 2025-12-02 08:37:14.159872639 +0000 UTC m=+0.165166662 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, version=17.1.12, config_id=tripleo_step5, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 2 03:37:14 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 2 03:37:14 localhost podman[86712]: 2025-12-02 08:37:14.476132358 +0000 UTC m=+0.477795336 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 2 03:37:14 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 2 03:37:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 03:37:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.2 total, 600.0 interval#012Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 505 writes, 1943 keys, 505 commit groups, 1.0 writes per commit group, ingest: 2.58 MB, 0.00 MB/s#012Interval WAL: 505 writes, 186 syncs, 2.72 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 2 03:37:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 2 03:37:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 2 03:37:17 localhost podman[86758]: 2025-12-02 08:37:17.050341817 +0000 UTC m=+0.054617688 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, managed_by=tripleo_ansible)
Dec 2 03:37:17 localhost systemd[1]: tmp-crun.X697zo.mount: Deactivated successfully.
Dec 2 03:37:17 localhost podman[86758]: 2025-12-02 08:37:17.097806681 +0000 UTC m=+0.102082552 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1)
Dec 2 03:37:17 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 2 03:37:17 localhost podman[86759]: 2025-12-02 08:37:17.098684145 +0000 UTC m=+0.100031883 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, architecture=x86_64, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=)
Dec 2 03:37:17 localhost podman[86759]: 2025-12-02 08:37:17.180909831 +0000 UTC m=+0.182257559 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 2 03:37:17 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 2 03:37:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 2 03:37:21 localhost podman[86805]: 2025-12-02 08:37:21.071740707 +0000 UTC m=+0.079639203 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=collectd, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3)
Dec 2 03:37:21 localhost podman[86805]: 2025-12-02 08:37:21.10975033 +0000 UTC m=+0.117648796 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 03:37:21 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 2 03:37:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 2 03:37:24 localhost podman[86825]: 2025-12-02 08:37:24.082228987 +0000 UTC m=+0.085074526 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, container_name=iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container)
Dec 2 03:37:24 localhost podman[86825]: 2025-12-02 08:37:24.118771489 +0000 UTC m=+0.121616998 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, container_name=iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 2 03:37:24 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 2 03:37:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:37:36 localhost podman[86845]: 2025-12-02 08:37:36.085846308 +0000 UTC m=+0.088606507 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, 
name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.openshift.expose-services=, architecture=x86_64) Dec 2 03:37:36 localhost podman[86845]: 2025-12-02 08:37:36.314152649 +0000 UTC m=+0.316912788 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, architecture=x86_64, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, version=17.1.12) Dec 2 03:37:36 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:37:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:37:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:37:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:37:42 localhost podman[86875]: 2025-12-02 08:37:42.0808749 +0000 UTC m=+0.082689109 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 2 03:37:42 localhost podman[86875]: 2025-12-02 08:37:42.085733239 +0000 UTC m=+0.087547438 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, distribution-scope=public, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.) Dec 2 03:37:42 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:37:42 localhost systemd[1]: tmp-crun.jOeZ81.mount: Deactivated successfully. 
Dec 2 03:37:42 localhost podman[86876]: 2025-12-02 08:37:42.139538323 +0000 UTC m=+0.141610749 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, 
managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:37:42 localhost podman[86876]: 2025-12-02 08:37:42.167776138 +0000 UTC m=+0.169848544 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, tcib_managed=true, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 2 03:37:42 localhost systemd[1]: tmp-crun.6a8Hbw.mount: Deactivated successfully. Dec 2 03:37:42 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:37:42 localhost podman[86877]: 2025-12-02 08:37:42.184862465 +0000 UTC m=+0.182677820 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:37:42 localhost podman[86877]: 2025-12-02 08:37:42.210712333 +0000 UTC m=+0.208527718 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, vcs-type=git, container_name=ceilometer_agent_ipmi, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:37:42 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:37:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:37:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:37:45 localhost podman[86946]: 2025-12-02 08:37:45.078188116 +0000 UTC m=+0.081183416 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Dec 2 03:37:45 localhost systemd[1]: tmp-crun.xptAV5.mount: Deactivated successfully. Dec 2 03:37:45 localhost podman[86945]: 2025-12-02 08:37:45.137349023 +0000 UTC m=+0.143301328 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:37:45 localhost podman[86945]: 2025-12-02 08:37:45.167824492 +0000 UTC m=+0.173776797 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git) Dec 2 03:37:45 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:37:45 localhost podman[86946]: 2025-12-02 08:37:45.443908826 +0000 UTC m=+0.446904136 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:37:45 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:37:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:37:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:37:47 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:37:48 localhost recover_tripleo_nova_virtqemud[87005]: 61907 Dec 2 03:37:48 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:37:48 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:37:48 localhost systemd[1]: tmp-crun.Q8HDUK.mount: Deactivated successfully. 
Dec 2 03:37:48 localhost podman[86993]: 2025-12-02 08:37:48.086950908 +0000 UTC m=+0.086360154 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, container_name=ovn_controller, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:37:48 localhost podman[86993]: 2025-12-02 08:37:48.112665051 +0000 UTC m=+0.112074317 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, tcib_managed=true, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 
ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller) Dec 2 03:37:48 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:37:48 localhost podman[86992]: 2025-12-02 08:37:48.135024339 +0000 UTC m=+0.137598805 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:37:48 localhost podman[86992]: 2025-12-02 08:37:48.202939186 +0000 UTC m=+0.205513592 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, url=https://www.redhat.com) Dec 2 03:37:48 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:37:49 localhost systemd[1]: tmp-crun.Ly8xlf.mount: Deactivated successfully. Dec 2 03:37:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:37:52 localhost podman[87042]: 2025-12-02 08:37:52.084550639 +0000 UTC m=+0.084697136 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1) Dec 2 03:37:52 localhost podman[87042]: 2025-12-02 08:37:52.122938594 +0000 UTC m=+0.123085101 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, vcs-type=git, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12) Dec 2 03:37:52 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:37:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:37:55 localhost systemd[1]: tmp-crun.27AuaZ.mount: Deactivated successfully. Dec 2 03:37:55 localhost podman[87063]: 2025-12-02 08:37:55.081141524 +0000 UTC m=+0.082757461 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-iscsid) Dec 2 03:37:55 localhost podman[87063]: 2025-12-02 08:37:55.119122817 +0000 UTC m=+0.120738744 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, 
release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, vendor=Red Hat, Inc.) Dec 2 03:37:55 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:38:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:38:07 localhost podman[87127]: 2025-12-02 08:38:07.081000239 +0000 UTC m=+0.086812437 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, container_name=metrics_qdr, release=1761123044, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc.) Dec 2 03:38:07 localhost podman[87127]: 2025-12-02 08:38:07.276746321 +0000 UTC m=+0.282558519 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, batch=17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, architecture=x86_64, config_id=tripleo_step1, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 
'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:38:07 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:38:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:38:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:38:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:38:13 localhost podman[87283]: 2025-12-02 08:38:13.09845674 +0000 UTC m=+0.089645877 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044) Dec 2 03:38:13 localhost podman[87283]: 2025-12-02 08:38:13.130469504 +0000 UTC m=+0.121658621 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public, release=1761123044, tcib_managed=true, build-date=2025-11-19T00:12:45Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 2 03:38:13 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 2 03:38:13 localhost podman[87281]: 2025-12-02 08:38:13.145820051 +0000 UTC m=+0.142244726 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, release=1761123044, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 2 03:38:13 localhost podman[87281]: 2025-12-02 08:38:13.156670271 +0000 UTC m=+0.153094976 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test':
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4)
Dec 2 03:38:13 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 2 03:38:13 localhost podman[87282]: 2025-12-02 08:38:13.200577073 +0000 UTC m=+0.195996230 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true)
Dec 2 03:38:13 localhost podman[87282]: 2025-12-02 08:38:13.255809589 +0000 UTC m=+0.251228746 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:11:48Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1761123044, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1,
vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 2 03:38:13 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 2 03:38:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 2 03:38:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 2 03:38:16 localhost podman[87354]: 2025-12-02 08:38:16.062298842 +0000 UTC m=+0.070078490 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=nova_compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git)
Dec 2 03:38:16 localhost podman[87355]: 2025-12-02 08:38:16.120508212 +0000 UTC m=+0.123480592 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform
17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc.) 
Dec 2 03:38:16 localhost podman[87354]: 2025-12-02 08:38:16.140043959 +0000 UTC m=+0.147823657 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git)
Dec 2 03:38:16 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 2 03:38:16 localhost podman[87355]: 2025-12-02 08:38:16.493849748 +0000 UTC m=+0.496822098 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044)
Dec 2 03:38:16 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 2 03:38:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 2 03:38:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 2 03:38:19 localhost systemd[1]: tmp-crun.ostplU.mount: Deactivated successfully.
Dec 2 03:38:19 localhost podman[87403]: 2025-12-02 08:38:19.064052985 +0000 UTC m=+0.068845815 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc.)
Dec 2 03:38:19 localhost podman[87403]: 2025-12-02 08:38:19.083022536 +0000 UTC m=+0.087815326 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro',
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-type=git)
Dec 2 03:38:19 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 2 03:38:19 localhost systemd[1]: tmp-crun.tTKR2A.mount: Deactivated successfully.
Dec 2 03:38:19 localhost podman[87402]: 2025-12-02 08:38:19.117319264 +0000 UTC m=+0.121235448 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro',
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true)
Dec 2 03:38:19 localhost podman[87402]: 2025-12-02 08:38:19.17681683 +0000 UTC m=+0.180733004 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent,
name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=ovn_metadata_agent) Dec 2 03:38:19 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:38:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:38:23 localhost podman[87448]: 2025-12-02 08:38:23.075052478 +0000 UTC m=+0.082222616 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, release=1761123044, config_id=tripleo_step3, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z) Dec 2 03:38:23 localhost podman[87448]: 2025-12-02 08:38:23.090748555 +0000 UTC m=+0.097918683 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:38:23 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:38:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:38:26 localhost podman[87468]: 2025-12-02 08:38:26.070688116 +0000 UTC m=+0.079454456 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, tcib_managed=true, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 03:38:26 localhost podman[87468]: 2025-12-02 08:38:26.084927472 +0000 UTC m=+0.093693842 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, com.redhat.component=openstack-iscsid-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=iscsid, io.buildah.version=1.41.4) Dec 2 03:38:26 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:38:28 localhost sshd[87487]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:38:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:38:38 localhost podman[87489]: 2025-12-02 08:38:38.071427075 +0000 UTC m=+0.076197384 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4) Dec 2 03:38:38 localhost podman[87489]: 2025-12-02 08:38:38.268825123 +0000 UTC m=+0.273595412 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.) Dec 2 03:38:38 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:38:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:38:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:38:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:38:44 localhost podman[87519]: 2025-12-02 08:38:44.085093479 +0000 UTC m=+0.078515390 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git) Dec 2 03:38:44 localhost podman[87519]: 2025-12-02 08:38:44.116142145 +0000 UTC m=+0.109564036 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com) Dec 2 03:38:44 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:38:44 localhost podman[87517]: 2025-12-02 08:38:44.195695193 +0000 UTC m=+0.194250501 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:38:44 localhost podman[87517]: 2025-12-02 08:38:44.234809528 +0000 UTC m=+0.233364866 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, url=https://www.redhat.com, 
vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12) Dec 2 03:38:44 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:38:44 localhost podman[87518]: 2025-12-02 08:38:44.258718081 +0000 UTC m=+0.251824553 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.buildah.version=1.41.4) Dec 2 03:38:44 localhost podman[87518]: 2025-12-02 08:38:44.315969833 +0000 UTC m=+0.309076285 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:38:44 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:38:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:38:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:38:47 localhost podman[87591]: 2025-12-02 08:38:47.07589926 +0000 UTC m=+0.080897899 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12) Dec 2 03:38:47 localhost podman[87591]: 2025-12-02 08:38:47.135923222 +0000 UTC m=+0.140921881 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, container_name=nova_compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:38:47 localhost systemd[1]: tmp-crun.vl1iFp.mount: Deactivated successfully. Dec 2 03:38:47 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:38:47 localhost podman[87592]: 2025-12-02 08:38:47.141724037 +0000 UTC m=+0.143756691 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, vcs-type=git, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Dec 2 03:38:47 localhost podman[87592]: 2025-12-02 08:38:47.50686014 +0000 UTC m=+0.508892794 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, version=17.1.12) Dec 2 03:38:47 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:38:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:38:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:38:50 localhost systemd[1]: tmp-crun.T9AtK8.mount: Deactivated successfully. Dec 2 03:38:50 localhost podman[87638]: 2025-12-02 08:38:50.057092746 +0000 UTC m=+0.065359885 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12) Dec 2 03:38:50 localhost podman[87639]: 2025-12-02 08:38:50.13019539 +0000 UTC m=+0.131360026 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:38:50 localhost podman[87638]: 2025-12-02 08:38:50.148790181 +0000 UTC m=+0.157057310 container 
exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64) Dec 2 03:38:50 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. 
Dec 2 03:38:50 localhost podman[87639]: 2025-12-02 08:38:50.180872476 +0000 UTC m=+0.182037072 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, 
container_name=ovn_controller, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:38:50 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:38:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:38:54 localhost systemd[1]: tmp-crun.UCiVkF.mount: Deactivated successfully. Dec 2 03:38:54 localhost podman[87685]: 2025-12-02 08:38:54.091654032 +0000 UTC m=+0.093130277 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, container_name=collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:38:54 localhost podman[87685]: 2025-12-02 08:38:54.128793821 +0000 UTC m=+0.130270086 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat 
OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, architecture=x86_64, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vendor=Red Hat, Inc.) Dec 2 03:38:54 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:38:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:38:57 localhost podman[87706]: 2025-12-02 08:38:57.085028775 +0000 UTC m=+0.091287854 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:38:57 localhost podman[87706]: 2025-12-02 08:38:57.099751234 +0000 UTC m=+0.106010333 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step3, version=17.1.12, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=) Dec 2 03:38:57 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 03:39:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:39:09 localhost podman[87770]: 2025-12-02 08:39:09.085019575 +0000 UTC m=+0.090510943 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:39:09 localhost podman[87770]: 2025-12-02 08:39:09.286940193 +0000 UTC m=+0.292431561 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, version=17.1.12, container_name=metrics_qdr, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git) Dec 2 03:39:09 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:39:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:39:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:39:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:39:15 localhost podman[87876]: 2025-12-02 08:39:15.082417725 +0000 UTC m=+0.085566712 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:39:15 localhost podman[87875]: 2025-12-02 08:39:15.117638159 +0000 UTC m=+0.121468455 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:39:15 localhost podman[87875]: 2025-12-02 08:39:15.128694904 +0000 UTC m=+0.132525210 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1) Dec 2 03:39:15 localhost podman[87876]: 2025-12-02 08:39:15.13731026 +0000 UTC m=+0.140459257 container exec_died 
814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true) Dec 2 03:39:15 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:39:15 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:39:15 localhost podman[87877]: 2025-12-02 08:39:15.174214882 +0000 UTC m=+0.178104200 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, architecture=x86_64, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, managed_by=tripleo_ansible) Dec 2 03:39:15 localhost podman[87877]: 2025-12-02 08:39:15.198838284 +0000 UTC m=+0.202727582 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) Dec 2 03:39:15 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:39:16 localhost systemd[1]: tmp-crun.tMOTVf.mount: Deactivated successfully. Dec 2 03:39:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:39:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:39:18 localhost systemd[1]: tmp-crun.4yEucg.mount: Deactivated successfully. Dec 2 03:39:18 localhost podman[87950]: 2025-12-02 08:39:18.083117337 +0000 UTC m=+0.084683927 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, release=1761123044, container_name=nova_migration_target, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible) Dec 2 03:39:18 localhost podman[87949]: 2025-12-02 08:39:18.132520255 +0000 UTC m=+0.138437409 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, 
io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, url=https://www.redhat.com) Dec 2 03:39:18 localhost podman[87949]: 2025-12-02 08:39:18.157915359 +0000 UTC m=+0.163832503 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-type=git, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:39:18 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:39:18 localhost podman[87950]: 2025-12-02 08:39:18.43776309 +0000 UTC m=+0.439329630 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat 
OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:39:18 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:39:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:39:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:39:21 localhost podman[87999]: 2025-12-02 08:39:21.087966007 +0000 UTC m=+0.091798808 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:39:21 localhost podman[87999]: 2025-12-02 08:39:21.145398615 +0000 UTC m=+0.149231436 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller) Dec 2 03:39:21 localhost systemd[1]: tmp-crun.It5g2i.mount: Deactivated successfully. Dec 2 03:39:21 localhost podman[87998]: 2025-12-02 08:39:21.154386141 +0000 UTC m=+0.158046098 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:14:25Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com) Dec 2 03:39:21 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. 
Dec 2 03:39:21 localhost podman[87998]: 2025-12-02 08:39:21.203863642 +0000 UTC m=+0.207523609 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, tcib_managed=true, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:39:21 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:39:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:39:25 localhost systemd[1]: tmp-crun.7QINUA.mount: Deactivated successfully. 
Dec 2 03:39:25 localhost podman[88047]: 2025-12-02 08:39:25.08175234 +0000 UTC m=+0.084565013 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4) Dec 2 03:39:25 localhost podman[88047]: 2025-12-02 08:39:25.094903045 +0000 UTC m=+0.097715708 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:39:25 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:39:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:39:28 localhost podman[88067]: 2025-12-02 08:39:28.038760826 +0000 UTC m=+0.049649997 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:39:28 localhost podman[88067]: 2025-12-02 08:39:28.046963239 +0000 UTC m=+0.057852390 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Dec 2 03:39:28 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:39:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:39:35 localhost recover_tripleo_nova_virtqemud[88088]: 61907 Dec 2 03:39:35 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:39:35 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:39:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:39:40 localhost podman[88090]: 2025-12-02 08:39:40.081907046 +0000 UTC m=+0.087637010 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, architecture=x86_64) Dec 2 03:39:40 localhost podman[88090]: 2025-12-02 08:39:40.272870402 +0000 UTC m=+0.278600296 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:39:40 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:39:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:39:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:39:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:39:46 localhost systemd[1]: tmp-crun.qsRGT0.mount: Deactivated successfully. 
Dec 2 03:39:46 localhost podman[88119]: 2025-12-02 08:39:46.090602687 +0000 UTC m=+0.088675249 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, release=1761123044) Dec 2 03:39:46 localhost podman[88120]: 2025-12-02 08:39:46.154919221 +0000 UTC m=+0.150159062 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:39:46 localhost podman[88119]: 2025-12-02 08:39:46.172976697 +0000 UTC m=+0.171049249 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, release=1761123044, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 2 03:39:46 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated 
successfully. Dec 2 03:39:46 localhost podman[88120]: 2025-12-02 08:39:46.214204673 +0000 UTC m=+0.209444474 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_compute, tcib_managed=true, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc.) Dec 2 03:39:46 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:39:46 localhost podman[88121]: 2025-12-02 08:39:46.125913345 +0000 UTC m=+0.113493218 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12) Dec 2 03:39:46 localhost podman[88121]: 2025-12-02 08:39:46.262164281 +0000 UTC m=+0.249744134 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, architecture=x86_64) Dec 2 03:39:46 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:39:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:39:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:39:49 localhost podman[88191]: 2025-12-02 08:39:49.048782807 +0000 UTC m=+0.061727852 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, config_id=tripleo_step5, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:39:49 localhost systemd[1]: tmp-crun.VHOE7W.mount: Deactivated successfully. 
Dec 2 03:39:49 localhost podman[88192]: 2025-12-02 08:39:49.111875546 +0000 UTC m=+0.117829701 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc.) Dec 2 03:39:49 localhost podman[88191]: 2025-12-02 08:39:49.140886593 +0000 UTC m=+0.153831678 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, config_id=tripleo_step5, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Dec 2 03:39:49 
localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:39:49 localhost podman[88192]: 2025-12-02 08:39:49.50477157 +0000 UTC m=+0.510725705 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:39:49 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:39:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:39:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:39:52 localhost systemd[1]: tmp-crun.TtulTE.mount: Deactivated successfully. 
Dec 2 03:39:52 localhost podman[88240]: 2025-12-02 08:39:52.103202722 +0000 UTC m=+0.099046266 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller) Dec 2 03:39:52 localhost podman[88239]: 2025-12-02 08:39:52.140214867 +0000 UTC m=+0.142874735 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, distribution-scope=public, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 2 03:39:52 localhost podman[88240]: 2025-12-02 08:39:52.15295538 +0000 UTC m=+0.148798864 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, 
container_name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, distribution-scope=public, tcib_managed=true) Dec 2 03:39:52 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. 
Dec 2 03:39:52 localhost podman[88239]: 2025-12-02 08:39:52.185990662 +0000 UTC m=+0.188650500 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:39:52 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:39:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:39:56 localhost podman[88286]: 2025-12-02 08:39:56.081700207 +0000 UTC m=+0.085629153 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, vcs-type=git, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:39:56 localhost podman[88286]: 2025-12-02 08:39:56.116444418 +0000 UTC m=+0.120373334 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, architecture=x86_64, description=Red Hat 
OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:39:56 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:39:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:39:59 localhost systemd[1]: tmp-crun.LVeIrm.mount: Deactivated successfully. Dec 2 03:39:59 localhost podman[88305]: 2025-12-02 08:39:59.070567372 +0000 UTC m=+0.080227799 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 
iscsid, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, container_name=iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:39:59 localhost podman[88305]: 2025-12-02 08:39:59.107155686 +0000 UTC m=+0.116816103 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 2 03:39:59 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:40:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:40:11 localhost systemd[1]: tmp-crun.nwmvRx.mount: Deactivated successfully. 
Dec 2 03:40:11 localhost podman[88347]: 2025-12-02 08:40:11.092441685 +0000 UTC m=+0.096606596 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack 
Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Dec 2 03:40:11 localhost podman[88347]: 2025-12-02 08:40:11.310484083 +0000 UTC m=+0.314648914 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, architecture=x86_64, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 2 03:40:11 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:40:13 localhost systemd[1]: tmp-crun.OxxWDN.mount: Deactivated successfully. 
Dec 2 03:40:13 localhost podman[88478]: 2025-12-02 08:40:13.790155497 +0000 UTC m=+0.068122104 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , version=7, ceph=True, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main) Dec 2 03:40:13 localhost podman[88478]: 2025-12-02 08:40:13.896902701 +0000 UTC m=+0.174869248 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, maintainer=Guillaume 
Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 03:40:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:40:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:40:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:40:17 localhost podman[88625]: 2025-12-02 08:40:17.134518619 +0000 UTC m=+0.128026232 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible) Dec 2 03:40:17 localhost podman[88624]: 2025-12-02 08:40:17.102862937 +0000 UTC m=+0.100318852 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:40:17 localhost podman[88624]: 2025-12-02 08:40:17.188935241 +0000 UTC m=+0.186391186 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 
17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
config_id=tripleo_step4, io.openshift.expose-services=) Dec 2 03:40:17 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:40:17 localhost podman[88623]: 2025-12-02 08:40:17.228748357 +0000 UTC m=+0.228527739 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, container_name=logrotate_crond, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, architecture=x86_64, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, batch=17.1_20251118.1) Dec 2 03:40:17 localhost podman[88625]: 2025-12-02 08:40:17.243979771 +0000 UTC m=+0.237487384 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, 
maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z) Dec 2 03:40:17 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:40:17 localhost podman[88623]: 2025-12-02 08:40:17.265930227 +0000 UTC m=+0.265709619 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=logrotate_crond, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:40:17 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:40:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:40:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:40:20 localhost podman[88694]: 2025-12-02 08:40:20.105492443 +0000 UTC m=+0.105066847 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 
'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.expose-services=, vcs-type=git, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:40:20 localhost podman[88694]: 2025-12-02 08:40:20.139791901 +0000 UTC m=+0.139366325 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team) Dec 2 03:40:20 localhost systemd[1]: tmp-crun.MCtYz8.mount: Deactivated successfully. 
Dec 2 03:40:20 localhost podman[88695]: 2025-12-02 08:40:20.163876797 +0000 UTC m=+0.159587882 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:40:20 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:40:20 localhost podman[88695]: 2025-12-02 08:40:20.529233907 +0000 UTC m=+0.524944992 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Dec 2 03:40:20 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:40:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:40:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:40:23 localhost podman[88743]: 2025-12-02 08:40:23.088429838 +0000 UTC m=+0.086626892 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, 
tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:40:23 localhost systemd[1]: tmp-crun.hmnsE2.mount: Deactivated successfully. 
Dec 2 03:40:23 localhost podman[88744]: 2025-12-02 08:40:23.157734044 +0000 UTC m=+0.150668788 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4) Dec 2 03:40:23 localhost podman[88743]: 2025-12-02 08:40:23.165309711 +0000 UTC m=+0.163506775 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:40:23 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. 
Dec 2 03:40:23 localhost podman[88744]: 2025-12-02 08:40:23.187405011 +0000 UTC m=+0.180339775 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller) Dec 2 03:40:23 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:40:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:40:27 localhost systemd[1]: tmp-crun.zr6gCM.mount: Deactivated successfully. Dec 2 03:40:27 localhost podman[88792]: 2025-12-02 08:40:27.071182065 +0000 UTC m=+0.071517270 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step3, container_name=collectd, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, release=1761123044, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:40:27 localhost podman[88792]: 2025-12-02 08:40:27.078894915 +0000 UTC m=+0.079230120 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1) Dec 2 03:40:27 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:40:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:40:30 localhost podman[88812]: 2025-12-02 08:40:30.075157809 +0000 UTC m=+0.080141885 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, managed_by=tripleo_ansible) Dec 2 03:40:30 localhost podman[88812]: 2025-12-02 08:40:30.107003508 +0000 UTC m=+0.111987544 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, release=1761123044, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, com.redhat.component=openstack-iscsid-container, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc.) Dec 2 03:40:30 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 03:40:32 localhost sshd[88832]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:40:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:40:42 localhost systemd[1]: tmp-crun.LfQ5YF.mount: Deactivated successfully. Dec 2 03:40:42 localhost podman[88834]: 2025-12-02 08:40:42.119516672 +0000 UTC m=+0.122099553 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:40:42 localhost podman[88834]: 2025-12-02 08:40:42.308060029 +0000 UTC m=+0.310642950 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:40:42 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:40:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:40:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. 
Dec 2 03:40:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:40:48 localhost podman[88866]: 2025-12-02 08:40:48.080270108 +0000 UTC m=+0.087580329 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git) Dec 2 03:40:48 localhost podman[88866]: 2025-12-02 08:40:48.088720879 +0000 UTC m=+0.096031110 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, name=rhosp17/openstack-cron, io.openshift.expose-services=) Dec 2 03:40:48 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:40:48 localhost podman[88868]: 2025-12-02 08:40:48.05474774 +0000 UTC m=+0.056998646 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi) Dec 2 03:40:48 localhost podman[88868]: 2025-12-02 08:40:48.132402764 +0000 UTC m=+0.134653690 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team) Dec 2 03:40:48 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:40:48 localhost podman[88867]: 2025-12-02 08:40:48.178252053 +0000 UTC m=+0.179883082 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044) Dec 2 03:40:48 localhost podman[88867]: 2025-12-02 08:40:48.23077603 +0000 UTC m=+0.232407059 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:40:48 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:40:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:40:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:40:51 localhost systemd[1]: tmp-crun.KD7nZO.mount: Deactivated successfully. 
Dec 2 03:40:51 localhost podman[88939]: 2025-12-02 08:40:51.081696121 +0000 UTC m=+0.087851346 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=nova_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:40:51 localhost systemd[1]: tmp-crun.Bl4cBn.mount: Deactivated successfully. 
Dec 2 03:40:51 localhost podman[88940]: 2025-12-02 08:40:51.129163905 +0000 UTC m=+0.133465938 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, 
tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute) Dec 2 03:40:51 localhost podman[88939]: 2025-12-02 08:40:51.159183151 +0000 UTC m=+0.165338406 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 
5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Dec 2 03:40:51 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated 
successfully. Dec 2 03:40:51 localhost podman[88940]: 2025-12-02 08:40:51.530394356 +0000 UTC m=+0.534696439 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Dec 2 03:40:51 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:40:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:40:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:40:54 localhost podman[88985]: 2025-12-02 08:40:54.084134064 +0000 UTC m=+0.083598795 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, container_name=ovn_metadata_agent, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:40:54 localhost podman[88985]: 2025-12-02 08:40:54.125358569 +0000 UTC m=+0.124823270 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
architecture=x86_64, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:40:54 localhost systemd[1]: tmp-crun.qnBagF.mount: Deactivated successfully. Dec 2 03:40:54 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:40:54 localhost podman[88986]: 2025-12-02 08:40:54.144201186 +0000 UTC m=+0.140066685 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller) Dec 2 03:40:54 localhost podman[88986]: 2025-12-02 08:40:54.167774419 +0000 UTC m=+0.163639948 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, config_id=tripleo_step4) Dec 2 03:40:54 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:40:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:40:57 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:40:58 localhost recover_tripleo_nova_virtqemud[89036]: 61907 Dec 2 03:40:58 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:40:58 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 2 03:40:58 localhost podman[89034]: 2025-12-02 08:40:58.087608692 +0000 UTC m=+0.091580583 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, tcib_managed=true, architecture=x86_64, distribution-scope=public, release=1761123044) Dec 2 03:40:58 localhost podman[89034]: 2025-12-02 08:40:58.100775028 +0000 UTC m=+0.104746939 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-collectd) Dec 2 03:40:58 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:41:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:41:01 localhost podman[89056]: 2025-12-02 08:41:01.077099614 +0000 UTC m=+0.081947038 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, maintainer=OpenStack TripleO Team, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4) Dec 2 03:41:01 localhost podman[89056]: 2025-12-02 08:41:01.113964825 +0000 UTC m=+0.118812229 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, name=rhosp17/openstack-iscsid, container_name=iscsid, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc.) Dec 2 03:41:01 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:41:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:41:13 localhost podman[89099]: 2025-12-02 08:41:13.057866302 +0000 UTC m=+0.064977873 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, 
io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Dec 2 03:41:13 localhost podman[89099]: 2025-12-02 08:41:13.237843205 +0000 UTC m=+0.244954686 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, config_id=tripleo_step1, version=17.1.12, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd) Dec 2 03:41:13 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:41:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:41:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:41:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:41:19 localhost podman[89205]: 2025-12-02 08:41:19.095341906 +0000 UTC m=+0.093682774 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:41:19 localhost podman[89205]: 2025-12-02 08:41:19.109997813 +0000 UTC m=+0.108338761 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:41:19 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:41:19 localhost systemd[1]: tmp-crun.ybRkqf.mount: Deactivated successfully. 
Dec 2 03:41:19 localhost podman[89206]: 2025-12-02 08:41:19.208632896 +0000 UTC m=+0.205288485 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, container_name=ceilometer_agent_compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1) Dec 2 03:41:19 localhost podman[89207]: 2025-12-02 08:41:19.297574473 +0000 UTC m=+0.289730733 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:41:19 localhost podman[89206]: 2025-12-02 08:41:19.317310185 +0000 UTC m=+0.313965824 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:41:19 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:41:19 localhost podman[89207]: 2025-12-02 08:41:19.331205472 +0000 UTC m=+0.323361702 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64) Dec 2 03:41:19 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:41:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:41:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:41:22 localhost podman[89277]: 2025-12-02 08:41:22.058974691 +0000 UTC m=+0.063780270 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, container_name=nova_compute, io.openshift.expose-services=, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Dec 2 03:41:22 localhost podman[89277]: 2025-12-02 08:41:22.075666617 +0000 UTC m=+0.080472206 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, release=1761123044, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) Dec 2 03:41:22 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:41:22 localhost podman[89278]: 2025-12-02 08:41:22.159052935 +0000 UTC m=+0.160866099 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public) Dec 2 03:41:22 localhost podman[89278]: 2025-12-02 08:41:22.500935145 +0000 UTC m=+0.502748309 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12) Dec 2 03:41:22 localhost systemd[1]: 
f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:41:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:41:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:41:25 localhost systemd[1]: tmp-crun.iLWRWA.mount: Deactivated successfully. Dec 2 03:41:25 localhost podman[89327]: 2025-12-02 08:41:25.06870465 +0000 UTC m=+0.070797160 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., version=17.1.12, container_name=ovn_controller, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git) Dec 2 03:41:25 localhost podman[89327]: 2025-12-02 08:41:25.120099596 +0000 UTC m=+0.122192146 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:41:25 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:41:25 localhost podman[89326]: 2025-12-02 08:41:25.121093694 +0000 UTC m=+0.127176747 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z) Dec 2 03:41:25 localhost podman[89326]: 2025-12-02 08:41:25.204952126 +0000 UTC m=+0.211035139 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:41:25 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:41:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:41:29 localhost podman[89373]: 2025-12-02 08:41:29.075962683 +0000 UTC m=+0.081223026 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, architecture=x86_64, 
com.redhat.component=openstack-collectd-container, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:41:29 localhost podman[89373]: 2025-12-02 08:41:29.087857717 +0000 UTC m=+0.093118070 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:41:29 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:41:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:41:32 localhost systemd[1]: tmp-crun.25FuUf.mount: Deactivated successfully. 
Dec 2 03:41:32 localhost podman[89394]: 2025-12-02 08:41:32.072792848 +0000 UTC m=+0.075869723 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, tcib_managed=true, distribution-scope=public, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container) Dec 2 03:41:32 localhost podman[89394]: 2025-12-02 08:41:32.080508594 +0000 UTC m=+0.083585409 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:41:32 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:41:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:41:44 localhost systemd[1]: tmp-crun.Ba9CUP.mount: Deactivated successfully. 
Dec 2 03:41:44 localhost podman[89415]: 2025-12-02 08:41:44.08014474 +0000 UTC m=+0.086287981 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, 
config_id=tripleo_step1, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:41:44 localhost podman[89415]: 2025-12-02 08:41:44.266836912 +0000 UTC m=+0.272980153 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:41:44 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:41:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:41:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:41:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:41:50 localhost podman[89445]: 2025-12-02 08:41:50.075989687 +0000 UTC m=+0.078067910 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4) Dec 2 03:41:50 localhost podman[89445]: 2025-12-02 08:41:50.093321497 +0000 UTC m=+0.095399700 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044) Dec 2 03:41:50 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:41:50 localhost podman[89446]: 2025-12-02 08:41:50.132791344 +0000 UTC m=+0.129308497 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible) Dec 2 03:41:50 localhost podman[89444]: 2025-12-02 08:41:50.182674161 +0000 UTC m=+0.184752054 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=logrotate_crond, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:41:50 localhost podman[89446]: 2025-12-02 08:41:50.189877382 +0000 UTC m=+0.186394465 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, 
Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:41:50 localhost podman[89444]: 2025-12-02 08:41:50.19671645 +0000 UTC m=+0.198794373 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible) Dec 2 03:41:50 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:41:50 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:41:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:41:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:41:53 localhost podman[89517]: 2025-12-02 08:41:53.084724805 +0000 UTC m=+0.082535737 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
architecture=x86_64, vcs-type=git, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4) Dec 2 03:41:53 localhost podman[89516]: 2025-12-02 08:41:53.14533863 +0000 UTC m=+0.142982785 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:41:53 localhost podman[89516]: 2025-12-02 08:41:53.175774811 +0000 UTC m=+0.173418906 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step5, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, version=17.1.12, maintainer=OpenStack TripleO 
Team, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public) Dec 2 03:41:53 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:41:53 localhost podman[89517]: 2025-12-02 08:41:53.408970227 +0000 UTC m=+0.406781129 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, distribution-scope=public, container_name=nova_migration_target, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Dec 2 03:41:53 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:41:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:41:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:41:56 localhost systemd[1]: tmp-crun.yEqPCH.mount: Deactivated successfully. 
Dec 2 03:41:56 localhost podman[89564]: 2025-12-02 08:41:56.068794499 +0000 UTC m=+0.074823580 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 2 03:41:56 localhost podman[89565]: 2025-12-02 08:41:56.079162967 +0000 UTC m=+0.085280941 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, container_name=ovn_controller, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible) Dec 2 03:41:56 localhost podman[89564]: 2025-12-02 08:41:56.107821004 +0000 UTC m=+0.113850075 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, build-date=2025-11-19T00:14:25Z, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 03:41:56 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:41:56 localhost podman[89565]: 2025-12-02 08:41:56.129893099 +0000 UTC m=+0.136011073 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git) Dec 2 03:41:56 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:41:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:42:00 localhost podman[89612]: 2025-12-02 08:42:00.075338008 +0000 UTC m=+0.078652537 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, container_name=collectd) Dec 2 03:42:00 localhost podman[89612]: 2025-12-02 08:42:00.084919382 +0000 UTC m=+0.088233951 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, tcib_managed=true, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., 
distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:42:00 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:42:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:42:03 localhost podman[89651]: 2025-12-02 08:42:03.066284163 +0000 UTC m=+0.071117197 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, com.redhat.component=openstack-iscsid-container, container_name=iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:42:03 localhost podman[89651]: 2025-12-02 08:42:03.079132126 +0000 UTC m=+0.083965170 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step3, 
maintainer=OpenStack TripleO Team, release=1761123044, container_name=iscsid) Dec 2 03:42:03 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:42:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:42:14 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:42:15 localhost recover_tripleo_nova_virtqemud[89675]: 61907 Dec 2 03:42:15 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:42:15 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:42:15 localhost systemd[1]: tmp-crun.gdh1li.mount: Deactivated successfully. Dec 2 03:42:15 localhost podman[89673]: 2025-12-02 08:42:15.088104666 +0000 UTC m=+0.093984796 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Dec 2 03:42:15 localhost podman[89673]: 2025-12-02 08:42:15.278892573 +0000 UTC m=+0.284772703 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:42:15 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:42:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:42:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:42:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:42:21 localhost podman[89782]: 2025-12-02 08:42:21.090944067 +0000 UTC m=+0.085321301 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 2 03:42:21 localhost systemd[1]: tmp-crun.WcPrud.mount: Deactivated successfully. Dec 2 03:42:21 localhost systemd[1]: tmp-crun.ah1BpU.mount: Deactivated successfully. 
Dec 2 03:42:21 localhost podman[89781]: 2025-12-02 08:42:21.140583736 +0000 UTC m=+0.134810986 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-cron-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z) Dec 2 03:42:21 localhost podman[89781]: 2025-12-02 08:42:21.151702266 +0000 UTC m=+0.145929536 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, architecture=x86_64) Dec 2 03:42:21 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:42:21 localhost podman[89783]: 2025-12-02 08:42:21.200679934 +0000 UTC m=+0.190413877 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, 
managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:42:21 localhost podman[89782]: 2025-12-02 08:42:21.221961376 +0000 UTC m=+0.216338610 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vcs-type=git, build-date=2025-11-19T00:11:48Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, version=17.1.12) Dec 2 03:42:21 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:42:21 localhost podman[89783]: 2025-12-02 08:42:21.254877063 +0000 UTC m=+0.244611006 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.openshift.expose-services=) Dec 2 03:42:21 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:42:22 localhost systemd[1]: tmp-crun.KO6jR3.mount: Deactivated successfully. Dec 2 03:42:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:42:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:42:24 localhost podman[89852]: 2025-12-02 08:42:24.068591545 +0000 UTC m=+0.070974442 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, distribution-scope=public, release=1761123044, config_id=tripleo_step5, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:42:24 localhost podman[89853]: 2025-12-02 08:42:24.129686735 +0000 UTC m=+0.129959877 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:42:24 
localhost podman[89852]: 2025-12-02 08:42:24.150266354 +0000 UTC m=+0.152649271 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute) Dec 2 03:42:24 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:42:24 localhost podman[89853]: 2025-12-02 08:42:24.541715362 +0000 UTC m=+0.541988444 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z) Dec 2 03:42:24 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:42:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:42:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:42:27 localhost podman[89902]: 2025-12-02 08:42:27.128675336 +0000 UTC m=+0.137834158 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git) Dec 2 03:42:27 localhost systemd[1]: tmp-crun.JLvdCv.mount: Deactivated successfully. 
Dec 2 03:42:27 localhost podman[89903]: 2025-12-02 08:42:27.17849291 +0000 UTC m=+0.174687785 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, distribution-scope=public, name=rhosp17/openstack-ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4) Dec 2 03:42:27 localhost podman[89902]: 2025-12-02 08:42:27.189795446 +0000 UTC m=+0.198954248 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044) Dec 2 03:42:27 localhost podman[89903]: 2025-12-02 08:42:27.197740569 +0000 UTC m=+0.193935444 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack 
Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 03:42:27 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:42:27 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:42:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:42:31 localhost systemd[1]: tmp-crun.BD5WiU.mount: Deactivated successfully. 
Dec 2 03:42:31 localhost podman[89950]: 2025-12-02 08:42:31.087715792 +0000 UTC m=+0.087133867 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, container_name=collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:42:31 localhost podman[89950]: 2025-12-02 08:42:31.101871215 +0000 UTC m=+0.101289280 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, config_id=tripleo_step3, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Dec 2 03:42:31 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:42:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:42:34 localhost systemd[1]: tmp-crun.xmbQHP.mount: Deactivated successfully. Dec 2 03:42:34 localhost podman[89970]: 2025-12-02 08:42:34.087212517 +0000 UTC m=+0.087400655 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, container_name=iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:42:34 localhost podman[89970]: 2025-12-02 08:42:34.123681993 +0000 UTC m=+0.123870141 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true) Dec 2 03:42:34 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:42:38 localhost sshd[89990]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:42:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:42:46 localhost systemd[1]: tmp-crun.959v3f.mount: Deactivated successfully. 
Dec 2 03:42:46 localhost podman[89992]: 2025-12-02 08:42:46.107173577 +0000 UTC m=+0.101475585 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-type=git, managed_by=tripleo_ansible) Dec 2 03:42:46 localhost podman[89992]: 2025-12-02 08:42:46.291834818 +0000 UTC m=+0.286136806 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:42:46 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:42:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:42:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:42:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:42:52 localhost systemd[1]: tmp-crun.AD6H7d.mount: Deactivated successfully. 
Dec 2 03:42:52 localhost podman[90023]: 2025-12-02 08:42:52.084781846 +0000 UTC m=+0.081972359 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, version=17.1.12) Dec 2 03:42:52 localhost podman[90021]: 2025-12-02 08:42:52.13885602 +0000 UTC m=+0.141263093 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 03:42:52 localhost podman[90023]: 2025-12-02 08:42:52.144343878 +0000 UTC m=+0.141534351 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Dec 2 03:42:52 localhost podman[90021]: 2025-12-02 08:42:52.144811742 +0000 UTC m=+0.147218785 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, 
name=logrotate_crond, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, release=1761123044, vcs-type=git, version=17.1.12, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container) Dec 2 03:42:52 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:42:52 localhost podman[90022]: 2025-12-02 08:42:52.190268314 +0000 UTC m=+0.187186929 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 
'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, batch=17.1_20251118.1) Dec 2 03:42:52 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:42:52 localhost podman[90022]: 2025-12-02 08:42:52.220772326 +0000 UTC m=+0.217690941 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044) Dec 2 03:42:52 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:42:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:42:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:42:55 localhost podman[90094]: 2025-12-02 08:42:55.092919107 +0000 UTC m=+0.091192911 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:42:55 localhost podman[90093]: 2025-12-02 08:42:55.151742747 +0000 UTC m=+0.154239650 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, release=1761123044, container_name=nova_compute) Dec 2 03:42:55 
localhost podman[90093]: 2025-12-02 08:42:55.171990236 +0000 UTC m=+0.174487159 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true) Dec 2 03:42:55 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:42:55 localhost podman[90094]: 2025-12-02 08:42:55.528432732 +0000 UTC m=+0.526706566 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Dec 2 03:42:55 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:42:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:42:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:42:58 localhost systemd[1]: tmp-crun.wxT6Bq.mount: Deactivated successfully. 
Dec 2 03:42:58 localhost podman[90145]: 2025-12-02 08:42:58.085591895 +0000 UTC m=+0.089040156 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, distribution-scope=public, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:42:58 localhost podman[90144]: 2025-12-02 08:42:58.056824075 +0000 UTC m=+0.066942479 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12) Dec 2 03:42:58 localhost podman[90145]: 2025-12-02 08:42:58.132921343 +0000 UTC m=+0.136369624 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, build-date=2025-11-18T23:34:05Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com) Dec 2 03:42:58 localhost podman[90144]: 2025-12-02 08:42:58.140805965 +0000 UTC m=+0.150924339 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:42:58 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:42:58 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:43:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:43:02 localhost systemd[1]: tmp-crun.AzLlhR.mount: Deactivated successfully. 
Dec 2 03:43:02 localhost podman[90192]: 2025-12-02 08:43:02.089164143 +0000 UTC m=+0.091321214 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, com.redhat.component=openstack-collectd-container) Dec 2 03:43:02 localhost podman[90192]: 2025-12-02 08:43:02.10211396 +0000 UTC m=+0.104271081 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:43:02 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:43:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:43:05 localhost podman[90235]: 2025-12-02 08:43:05.092382763 +0000 UTC m=+0.092362897 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, distribution-scope=public, konflux.additional-tags=17.1.12 
17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=iscsid, name=rhosp17/openstack-iscsid) Dec 2 03:43:05 localhost podman[90235]: 2025-12-02 08:43:05.133212953 +0000 UTC m=+0.133193097 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, container_name=iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, config_id=tripleo_step3) Dec 2 03:43:05 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:43:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:43:17 localhost systemd[1]: tmp-crun.HZbyQn.mount: Deactivated successfully. 
Dec 2 03:43:17 localhost podman[90254]: 2025-12-02 08:43:17.089524153 +0000 UTC m=+0.096078622 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:43:17 localhost podman[90254]: 2025-12-02 08:43:17.29728958 +0000 UTC m=+0.303844039 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team) Dec 2 03:43:17 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:43:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:43:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:43:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:43:23 localhost systemd[1]: tmp-crun.u2XG8I.mount: Deactivated successfully. 
Dec 2 03:43:23 localhost podman[90361]: 2025-12-02 08:43:23.094950391 +0000 UTC m=+0.095769451 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044) Dec 2 03:43:23 localhost podman[90362]: 2025-12-02 08:43:23.147425167 +0000 UTC m=+0.148564336 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:43:23 localhost podman[90361]: 2025-12-02 08:43:23.15603123 +0000 UTC m=+0.156850260 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond) Dec 2 03:43:23 localhost systemd[1]: 
7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:43:23 localhost podman[90362]: 2025-12-02 08:43:23.199895913 +0000 UTC m=+0.201035042 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Dec 2 03:43:23 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:43:23 localhost podman[90363]: 2025-12-02 08:43:23.246658044 +0000 UTC m=+0.242626095 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:43:23 localhost podman[90363]: 2025-12-02 08:43:23.296762746 +0000 UTC m=+0.292730777 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git) Dec 2 03:43:23 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:43:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:43:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:43:26 localhost systemd[1]: tmp-crun.yJOU4t.mount: Deactivated successfully. Dec 2 03:43:26 localhost podman[90436]: 2025-12-02 08:43:26.096190491 +0000 UTC m=+0.094989008 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, container_name=nova_compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:43:26 localhost podman[90436]: 2025-12-02 08:43:26.129702416 +0000 UTC 
m=+0.128500873 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, config_id=tripleo_step5, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, distribution-scope=public, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute) Dec 2 03:43:26 localhost systemd[1]: tmp-crun.66OldF.mount: Deactivated successfully. Dec 2 03:43:26 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:43:26 localhost podman[90437]: 2025-12-02 08:43:26.143668554 +0000 UTC m=+0.139159180 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:43:26 localhost podman[90437]: 2025-12-02 08:43:26.517744989 +0000 UTC m=+0.513235595 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container) Dec 2 03:43:26 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:43:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:43:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:43:29 localhost systemd[1]: tmp-crun.zLhFX7.mount: Deactivated successfully. 
Dec 2 03:43:29 localhost podman[90489]: 2025-12-02 08:43:29.092946304 +0000 UTC m=+0.093271666 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Dec 2 03:43:29 localhost podman[90488]: 2025-12-02 08:43:29.072825008 +0000 UTC m=+0.080039740 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com) Dec 2 03:43:29 localhost podman[90489]: 2025-12-02 08:43:29.146592565 +0000 UTC m=+0.146917917 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.buildah.version=1.41.4, container_name=ovn_controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 03:43:29 localhost podman[90488]: 2025-12-02 08:43:29.15589551 +0000 UTC m=+0.163110202 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=ovn_metadata_agent, architecture=x86_64) Dec 2 03:43:29 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:43:29 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:43:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:43:32 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:43:33 localhost recover_tripleo_nova_virtqemud[90542]: 61907 Dec 2 03:43:33 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:43:33 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 2 03:43:33 localhost podman[90535]: 2025-12-02 08:43:33.0784987 +0000 UTC m=+0.081803364 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:43:33 localhost podman[90535]: 2025-12-02 08:43:33.088128805 +0000 UTC m=+0.091433519 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, container_name=collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:43:33 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:43:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:43:36 localhost systemd[1]: tmp-crun.k256vW.mount: Deactivated successfully. Dec 2 03:43:36 localhost podman[90557]: 2025-12-02 08:43:36.080585375 +0000 UTC m=+0.081073511 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, release=1761123044, name=rhosp17/openstack-iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, version=17.1.12, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, 
description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:43:36 localhost podman[90557]: 2025-12-02 08:43:36.094878783 +0000 UTC m=+0.095366959 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, url=https://www.redhat.com) Dec 2 03:43:36 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:43:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:43:48 localhost systemd[1]: tmp-crun.2dhiCB.mount: Deactivated successfully. 
Dec 2 03:43:48 localhost podman[90575]: 2025-12-02 08:43:48.081515202 +0000 UTC m=+0.085000133 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:43:48 localhost podman[90575]: 2025-12-02 08:43:48.284317986 +0000 UTC m=+0.287802917 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Dec 2 03:43:48 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:43:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:43:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:43:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:43:54 localhost podman[90604]: 2025-12-02 08:43:54.069630261 +0000 UTC m=+0.079249075 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:43:54 localhost podman[90604]: 2025-12-02 08:43:54.079823894 +0000 UTC m=+0.089442708 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:43:54 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:43:54 localhost podman[90605]: 2025-12-02 08:43:54.128294717 +0000 UTC m=+0.132189466 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com) Dec 2 03:43:54 localhost podman[90605]: 2025-12-02 08:43:54.186702864 +0000 UTC m=+0.190597633 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 
17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, vcs-type=git) Dec 2 03:43:54 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:43:54 localhost podman[90606]: 2025-12-02 08:43:54.18819145 +0000 UTC m=+0.189798809 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:43:54 localhost podman[90606]: 2025-12-02 08:43:54.273977364 +0000 UTC m=+0.275584703 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible) Dec 2 03:43:54 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:43:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:43:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:43:57 localhost podman[90673]: 2025-12-02 08:43:57.049767906 +0000 UTC m=+0.062437271 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:43:57 localhost podman[90673]: 2025-12-02 08:43:57.095699982 +0000 UTC m=+0.108369317 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64) Dec 2 03:43:57 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:43:57 localhost podman[90674]: 2025-12-02 08:43:57.173696348 +0000 UTC m=+0.179336259 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, distribution-scope=public, version=17.1.12) Dec 2 03:43:57 localhost podman[90674]: 2025-12-02 08:43:57.50582358 +0000 UTC m=+0.511463451 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, release=1761123044, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:43:57 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:43:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:43:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:44:00 localhost podman[90723]: 2025-12-02 08:44:00.087890945 +0000 UTC m=+0.086955991 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, 
distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4) Dec 2 03:44:00 localhost podman[90722]: 2025-12-02 08:44:00.066107809 +0000 UTC m=+0.070333403 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, version=17.1.12, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Dec 2 03:44:00 localhost podman[90723]: 2025-12-02 08:44:00.133931714 +0000 UTC m=+0.132996780 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d 
(image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, tcib_managed=true) Dec 2 03:44:00 
localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:44:00 localhost podman[90722]: 2025-12-02 08:44:00.149754338 +0000 UTC m=+0.153979912 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, architecture=x86_64, release=1761123044, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vcs-type=git, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Dec 2 03:44:00 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:44:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:44:04 localhost podman[90770]: 2025-12-02 08:44:04.065762027 +0000 UTC m=+0.072657144 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, version=17.1.12) Dec 2 03:44:04 localhost podman[90770]: 2025-12-02 08:44:04.09789539 +0000 UTC m=+0.104790457 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, vcs-type=git, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z) Dec 2 03:44:04 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:44:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:44:07 localhost podman[90790]: 2025-12-02 08:44:07.074536727 +0000 UTC m=+0.079631408 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 2 03:44:07 localhost podman[90790]: 2025-12-02 08:44:07.111918921 +0000 UTC m=+0.117013622 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12) Dec 2 03:44:07 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:44:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:44:19 localhost systemd[1]: tmp-crun.Bud94f.mount: Deactivated successfully. 
Dec 2 03:44:19 localhost podman[90810]: 2025-12-02 08:44:19.082707652 +0000 UTC m=+0.084682902 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, version=17.1.12, release=1761123044, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:44:19 localhost podman[90810]: 2025-12-02 08:44:19.308006136 +0000 UTC m=+0.309981386 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=metrics_qdr, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1) Dec 2 03:44:19 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:44:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:44:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:44:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:44:25 localhost systemd[1]: tmp-crun.Cbnaly.mount: Deactivated successfully. 
Dec 2 03:44:25 localhost podman[90918]: 2025-12-02 08:44:25.102747579 +0000 UTC m=+0.096928476 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:44:25 localhost systemd[1]: tmp-crun.ZDHwnQ.mount: Deactivated successfully. Dec 2 03:44:25 localhost podman[90919]: 2025-12-02 08:44:25.149803229 +0000 UTC m=+0.141853541 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:44:25 localhost podman[90918]: 2025-12-02 08:44:25.165167749 +0000 UTC m=+0.159348616 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, 
maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, architecture=x86_64) Dec 2 03:44:25 localhost podman[90917]: 2025-12-02 08:44:25.175754293 +0000 UTC m=+0.175187200 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, 
distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:44:25 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:44:25 localhost podman[90919]: 2025-12-02 08:44:25.182722366 +0000 UTC m=+0.174772678 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:44:25 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:44:25 localhost podman[90917]: 2025-12-02 08:44:25.214867559 +0000 UTC m=+0.214300526 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, container_name=logrotate_crond, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git) Dec 2 03:44:25 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:44:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:44:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:44:28 localhost systemd[1]: tmp-crun.l2efAu.mount: Deactivated successfully. 
Dec 2 03:44:28 localhost podman[90987]: 2025-12-02 08:44:28.075356452 +0000 UTC m=+0.081804744 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute) Dec 2 03:44:28 localhost podman[90987]: 2025-12-02 08:44:28.12888864 +0000 UTC m=+0.135336982 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, container_name=nova_compute, release=1761123044, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Dec 2 03:44:28 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:44:28 localhost podman[90988]: 2025-12-02 08:44:28.130877101 +0000 UTC m=+0.132250088 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-nova-compute, release=1761123044) Dec 2 03:44:28 localhost podman[90988]: 2025-12-02 08:44:28.538924186 +0000 UTC m=+0.540297183 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, 
description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Dec 2 03:44:28 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:44:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:44:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:44:31 localhost podman[91034]: 2025-12-02 08:44:31.044275883 +0000 UTC m=+0.052046174 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:44:31 localhost podman[91034]: 2025-12-02 08:44:31.067856144 +0000 UTC m=+0.075626445 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., 
io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, release=1761123044, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:44:31 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:44:31 localhost systemd[1]: tmp-crun.pC9fKe.mount: Deactivated successfully. Dec 2 03:44:31 localhost podman[91035]: 2025-12-02 08:44:31.110461638 +0000 UTC m=+0.115659990 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:44:31 localhost podman[91035]: 2025-12-02 08:44:31.160823009 +0000 UTC m=+0.166021371 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, distribution-scope=public, version=17.1.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container) Dec 2 03:44:31 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:44:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:44:35 localhost podman[91082]: 2025-12-02 08:44:35.064179581 +0000 UTC m=+0.070788087 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, release=1761123044, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com) Dec 2 03:44:35 localhost podman[91082]: 2025-12-02 08:44:35.078898141 +0000 UTC m=+0.085506637 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd) Dec 2 03:44:35 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:44:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:44:38 localhost podman[91102]: 2025-12-02 08:44:38.053653931 +0000 UTC m=+0.061470722 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container) Dec 2 03:44:38 localhost podman[91102]: 2025-12-02 08:44:38.060740398 +0000 UTC m=+0.068557179 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, 
io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, build-date=2025-11-18T23:44:13Z) Dec 2 03:44:38 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:44:47 localhost sshd[91121]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:44:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:44:50 localhost podman[91123]: 2025-12-02 08:44:50.079300333 +0000 UTC m=+0.084186977 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, release=1761123044, version=17.1.12, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:44:50 localhost podman[91123]: 2025-12-02 08:44:50.270944247 +0000 UTC m=+0.275830891 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public) Dec 2 03:44:50 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:44:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:44:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:44:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:44:56 localhost systemd[1]: tmp-crun.QCdG2E.mount: Deactivated successfully. 
Dec 2 03:44:56 localhost podman[91151]: 2025-12-02 08:44:56.087813818 +0000 UTC m=+0.088945463 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, container_name=logrotate_crond, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z) Dec 2 03:44:56 localhost podman[91151]: 2025-12-02 08:44:56.121490398 +0000 UTC m=+0.122622043 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.) Dec 2 03:44:56 localhost systemd[1]: tmp-crun.yPTtOg.mount: Deactivated successfully. Dec 2 03:44:56 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:44:56 localhost podman[91153]: 2025-12-02 08:44:56.142002956 +0000 UTC m=+0.136202139 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:44:56 localhost podman[91153]: 2025-12-02 08:44:56.171946762 +0000 UTC m=+0.166145915 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:44:56 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:44:56 localhost podman[91152]: 2025-12-02 08:44:56.178033738 +0000 UTC m=+0.175463820 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public) Dec 2 03:44:56 localhost podman[91152]: 2025-12-02 08:44:56.257169289 +0000 UTC m=+0.254599361 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ceilometer_agent_compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, 
release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64) Dec 2 03:44:56 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:44:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:44:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:44:58 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Dec 2 03:44:59 localhost recover_tripleo_nova_virtqemud[91230]: 61907 Dec 2 03:44:59 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:44:59 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:44:59 localhost systemd[1]: tmp-crun.c7tfKd.mount: Deactivated successfully. Dec 2 03:44:59 localhost podman[91222]: 2025-12-02 08:44:59.095396033 +0000 UTC m=+0.095634346 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': 
{'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team) Dec 2 03:44:59 localhost systemd[1]: tmp-crun.ySagJi.mount: Deactivated successfully. 
Dec 2 03:44:59 localhost podman[91223]: 2025-12-02 08:44:59.152869781 +0000 UTC m=+0.149547925 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:44:59 localhost podman[91222]: 2025-12-02 08:44:59.175287906 +0000 UTC m=+0.175526169 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, config_id=tripleo_step5, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:44:59 localhost systemd[1]: 
6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:44:59 localhost podman[91223]: 2025-12-02 08:44:59.543909429 +0000 UTC m=+0.540587603 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, 
io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:44:59 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:45:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:45:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:45:02 localhost systemd[1]: tmp-crun.Fgr0BD.mount: Deactivated successfully. 
Dec 2 03:45:02 localhost podman[91276]: 2025-12-02 08:45:02.066130032 +0000 UTC m=+0.067618698 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:45:02 localhost podman[91276]: 2025-12-02 08:45:02.109814508 +0000 UTC m=+0.111303194 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-ovn-controller-container, vcs-type=git) Dec 2 03:45:02 localhost systemd[1]: tmp-crun.C0Gina.mount: Deactivated successfully. Dec 2 03:45:02 localhost podman[91275]: 2025-12-02 08:45:02.119661039 +0000 UTC m=+0.125836919 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=ovn_metadata_agent) Dec 2 03:45:02 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. 
Dec 2 03:45:02 localhost podman[91275]: 2025-12-02 08:45:02.171764592 +0000 UTC m=+0.177940452 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:45:02 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:45:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:45:06 localhost podman[91322]: 2025-12-02 08:45:06.058356141 +0000 UTC m=+0.064732000 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true) Dec 2 03:45:06 localhost podman[91322]: 2025-12-02 08:45:06.065698055 +0000 UTC m=+0.072073904 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Dec 2 03:45:06 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:45:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:45:09 localhost podman[91342]: 2025-12-02 08:45:09.087545358 +0000 UTC m=+0.089279791 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, url=https://www.redhat.com, distribution-scope=public) Dec 2 03:45:09 localhost podman[91342]: 2025-12-02 08:45:09.128002796 +0000 UTC m=+0.129737229 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.12, 
config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:45:09 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:45:14 localhost sshd[91361]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:45:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:45:21 localhost podman[91363]: 2025-12-02 08:45:21.09026337 +0000 UTC m=+0.086574859 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc.) Dec 2 03:45:21 localhost podman[91363]: 2025-12-02 08:45:21.32637669 +0000 UTC m=+0.322688129 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, 
name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044) Dec 2 03:45:21 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:45:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:45:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:45:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:45:27 localhost systemd[1]: tmp-crun.YmR5fk.mount: Deactivated successfully. 
Dec 2 03:45:27 localhost podman[91455]: 2025-12-02 08:45:27.091665761 +0000 UTC m=+0.092344625 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, release=1761123044, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:45:27 localhost systemd[1]: tmp-crun.k5iqQM.mount: Deactivated successfully. Dec 2 03:45:27 localhost podman[91454]: 2025-12-02 08:45:27.141862466 +0000 UTC m=+0.142689104 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
container_name=logrotate_crond, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, vcs-type=git) Dec 2 03:45:27 localhost podman[91456]: 2025-12-02 08:45:27.181601181 +0000 UTC m=+0.176718095 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git) Dec 2 
03:45:27 localhost podman[91455]: 2025-12-02 08:45:27.212976431 +0000 UTC m=+0.213655375 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true) Dec 2 03:45:27 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:45:27 localhost podman[91454]: 2025-12-02 08:45:27.233335754 +0000 UTC m=+0.234162392 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, architecture=x86_64, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:45:27 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:45:27 localhost podman[91456]: 2025-12-02 08:45:27.285037055 +0000 UTC m=+0.280153979 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi) Dec 2 03:45:27 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:45:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:45:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:45:30 localhost podman[91540]: 2025-12-02 08:45:30.079240355 +0000 UTC m=+0.083764953 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible) Dec 2 03:45:30 localhost podman[91540]: 2025-12-02 08:45:30.136726364 +0000 UTC m=+0.141250962 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, release=1761123044, version=17.1.12, 
name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, managed_by=tripleo_ansible, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:45:30 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:45:30 localhost podman[91541]: 2025-12-02 08:45:30.143900212 +0000 UTC m=+0.143371865 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=nova_migration_target) Dec 2 03:45:30 localhost podman[91541]: 2025-12-02 08:45:30.510499734 +0000 UTC m=+0.509971437 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true) Dec 2 03:45:30 localhost systemd[1]: 
f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:45:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:45:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:45:33 localhost systemd[1]: tmp-crun.mOyPVs.mount: Deactivated successfully. Dec 2 03:45:33 localhost podman[91590]: 2025-12-02 08:45:33.074718682 +0000 UTC m=+0.082106162 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 2 03:45:33 localhost podman[91591]: 2025-12-02 08:45:33.054724311 +0000 UTC m=+0.063123572 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, 
io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 03:45:33 localhost podman[91590]: 2025-12-02 08:45:33.114794218 +0000 UTC m=+0.122181668 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:45:33 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:45:33 localhost podman[91591]: 2025-12-02 08:45:33.138876305 +0000 UTC m=+0.147275606 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack 
Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 03:45:33 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:45:33 localhost sshd[91638]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:45:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:45:37 localhost systemd[1]: tmp-crun.cLtTZK.mount: Deactivated successfully. 
Dec 2 03:45:37 localhost podman[91639]: 2025-12-02 08:45:37.0705291 +0000 UTC m=+0.079343107 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1) Dec 2 03:45:37 localhost podman[91639]: 2025-12-02 08:45:37.084322402 +0000 UTC m=+0.093136399 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 
'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team) Dec 2 03:45:37 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:45:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:45:40 localhost podman[91659]: 2025-12-02 08:45:40.058300871 +0000 UTC m=+0.068865777 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public) Dec 2 03:45:40 localhost podman[91659]: 2025-12-02 08:45:40.091496566 +0000 UTC m=+0.102061462 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4) Dec 2 03:45:40 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:45:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:45:52 localhost systemd[1]: tmp-crun.cBIbNM.mount: Deactivated successfully. 
Dec 2 03:45:52 localhost podman[91682]: 2025-12-02 08:45:52.090766542 +0000 UTC m=+0.090045314 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:45:52 localhost podman[91682]: 2025-12-02 08:45:52.30227334 +0000 UTC m=+0.301552092 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, architecture=x86_64, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Dec 2 03:45:52 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:45:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:45:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:45:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:45:57 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Dec 2 03:45:58 localhost recover_tripleo_nova_virtqemud[91731]: 61907 Dec 2 03:45:58 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:45:58 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:45:58 localhost systemd[1]: tmp-crun.SoeqUj.mount: Deactivated successfully. Dec 2 03:45:58 localhost podman[91714]: 2025-12-02 08:45:58.057676569 +0000 UTC m=+0.057073207 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:45:58 localhost podman[91712]: 2025-12-02 08:45:58.083309633 +0000 UTC m=+0.084766743 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.expose-services=, 
url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4) Dec 2 03:45:58 localhost podman[91712]: 2025-12-02 08:45:58.094857406 +0000 UTC m=+0.096314556 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4) 
Dec 2 03:45:58 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:45:58 localhost podman[91714]: 2025-12-02 08:45:58.140598425 +0000 UTC m=+0.139995043 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044) Dec 2 03:45:58 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:45:58 localhost podman[91713]: 2025-12-02 08:45:58.194581236 +0000 UTC m=+0.192686464 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=ceilometer_agent_compute, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc.) 
Dec 2 03:45:58 localhost podman[91713]: 2025-12-02 08:45:58.221755867 +0000 UTC m=+0.219861135 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, 
com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:45:58 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:46:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:46:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:46:01 localhost podman[91786]: 2025-12-02 08:46:01.064565414 +0000 UTC m=+0.070672233 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12) Dec 2 03:46:01 localhost podman[91785]: 2025-12-02 08:46:01.123100734 +0000 UTC m=+0.128450050 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, version=17.1.12, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044) Dec 2 03:46:01 localhost podman[91785]: 
2025-12-02 08:46:01.175106315 +0000 UTC m=+0.180455601 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., container_name=nova_compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044) Dec 2 03:46:01 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:46:01 localhost podman[91786]: 2025-12-02 08:46:01.445486703 +0000 UTC m=+0.451593462 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Dec 2 03:46:01 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:46:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:46:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:46:04 localhost podman[91832]: 2025-12-02 08:46:04.08554683 +0000 UTC m=+0.086357711 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=ovn_metadata_agent) Dec 2 03:46:04 localhost podman[91833]: 2025-12-02 08:46:04.121077417 +0000 UTC m=+0.120962720 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, url=https://www.redhat.com, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:46:04 localhost podman[91832]: 2025-12-02 08:46:04.144921596 +0000 UTC m=+0.145732467 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z) Dec 2 03:46:04 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:46:04 localhost podman[91833]: 2025-12-02 08:46:04.171962613 +0000 UTC m=+0.171847876 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Dec 2 03:46:04 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:46:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:46:08 localhost podman[91882]: 2025-12-02 08:46:08.083128141 +0000 UTC m=+0.086286709 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, container_name=collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, release=1761123044) Dec 2 03:46:08 localhost podman[91882]: 2025-12-02 08:46:08.117174693 +0000 UTC m=+0.120333281 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
name=rhosp17/openstack-collectd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
build-date=2025-11-18T22:51:28Z, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:46:08 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:46:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:46:11 localhost systemd[1]: tmp-crun.0ylcIi.mount: Deactivated successfully. Dec 2 03:46:11 localhost podman[91902]: 2025-12-02 08:46:11.087351175 +0000 UTC m=+0.089908251 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid) Dec 2 03:46:11 localhost podman[91902]: 2025-12-02 08:46:11.125195722 +0000 UTC m=+0.127752788 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 
17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 03:46:11 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:46:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:46:23 localhost systemd[1]: tmp-crun.0FDrUl.mount: Deactivated successfully. Dec 2 03:46:23 localhost podman[91921]: 2025-12-02 08:46:23.091624544 +0000 UTC m=+0.096649177 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:46:23 localhost podman[91921]: 2025-12-02 08:46:23.302287796 +0000 UTC m=+0.307312349 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:46:23 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. 
Dec 2 03:46:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:46:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:46:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:46:28 localhost podman[92028]: 2025-12-02 08:46:28.494861863 +0000 UTC m=+0.095972706 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4) Dec 2 03:46:28 localhost podman[92028]: 2025-12-02 08:46:28.522630863 +0000 UTC m=+0.123741716 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:46:28 localhost systemd[1]: 
814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:46:28 localhost podman[92027]: 2025-12-02 08:46:28.544255373 +0000 UTC m=+0.147175721 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:46:28 localhost podman[92027]: 2025-12-02 08:46:28.552677551 +0000 UTC m=+0.155597889 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:46:28 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:46:28 localhost podman[92029]: 2025-12-02 08:46:28.598256085 +0000 UTC m=+0.195000354 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack 
TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com) Dec 2 03:46:28 localhost podman[92029]: 2025-12-02 08:46:28.623851768 +0000 UTC m=+0.220596067 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team) Dec 2 03:46:28 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:46:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:46:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:46:32 localhost systemd[1]: tmp-crun.Vbyhlc.mount: Deactivated successfully. 
Dec 2 03:46:32 localhost podman[92133]: 2025-12-02 08:46:32.102558603 +0000 UTC m=+0.099650009 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1) Dec 2 03:46:32 localhost podman[92132]: 2025-12-02 08:46:32.136516511 +0000 UTC m=+0.135460424 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, release=1761123044) Dec 2 03:46:32 localhost podman[92132]: 2025-12-02 08:46:32.164822466 
+0000 UTC m=+0.163766399 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12) Dec 2 03:46:32 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:46:32 localhost podman[92133]: 2025-12-02 08:46:32.551700688 +0000 UTC m=+0.548792034 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:46:32 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:46:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:46:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:46:35 localhost podman[92193]: 2025-12-02 08:46:35.094034515 +0000 UTC m=+0.094764038 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12) Dec 2 03:46:35 localhost podman[92193]: 2025-12-02 08:46:35.137923397 +0000 UTC m=+0.138652930 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.openshift.expose-services=, 
io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1) Dec 2 03:46:35 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:46:35 localhost podman[92194]: 2025-12-02 08:46:35.162558421 +0000 UTC m=+0.158648833 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:46:35 localhost podman[92194]: 2025-12-02 08:46:35.211035963 +0000 UTC m=+0.207126445 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:46:35 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:46:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:46:39 localhost podman[92242]: 2025-12-02 08:46:39.072965317 +0000 UTC m=+0.076947735 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, release=1761123044, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, vendor=Red Hat, Inc., 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.12, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:46:39 localhost podman[92242]: 2025-12-02 08:46:39.111884527 +0000 UTC m=+0.115866955 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, 
name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true) Dec 2 03:46:39 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:46:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:46:42 localhost podman[92263]: 2025-12-02 08:46:42.064506012 +0000 UTC m=+0.068302150 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, container_name=iscsid, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, 
description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 03:46:42 localhost podman[92263]: 2025-12-02 08:46:42.073775085 +0000 UTC m=+0.077571263 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:46:42 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:46:50 localhost sshd[92282]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:46:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:46:54 localhost systemd[1]: tmp-crun.5flj7s.mount: Deactivated successfully. Dec 2 03:46:54 localhost podman[92284]: 2025-12-02 08:46:54.086199825 +0000 UTC m=+0.086094994 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, container_name=metrics_qdr, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:46:54 localhost podman[92284]: 2025-12-02 08:46:54.301241382 +0000 UTC m=+0.301136621 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, release=1761123044, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible) Dec 2 03:46:54 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. 
Dec 2 03:46:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:46:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:46:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:46:59 localhost podman[92314]: 2025-12-02 08:46:59.048504441 +0000 UTC m=+0.056640553 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, build-date=2025-11-19T00:12:45Z, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi) Dec 2 03:46:59 localhost podman[92314]: 2025-12-02 08:46:59.092760344 +0000 UTC m=+0.100896806 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:46:59 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:46:59 localhost podman[92312]: 2025-12-02 08:46:59.107147174 +0000 UTC m=+0.117179595 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:46:59 localhost podman[92312]: 2025-12-02 08:46:59.11585297 +0000 UTC m=+0.125885391 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:46:59 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:46:59 localhost podman[92313]: 2025-12-02 08:46:59.129202459 +0000 UTC m=+0.135682500 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com) Dec 2 03:46:59 localhost podman[92313]: 2025-12-02 08:46:59.174837934 +0000 UTC m=+0.181317975 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=ceilometer_agent_compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:46:59 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:47:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:47:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:47:03 localhost podman[92384]: 2025-12-02 08:47:03.066305061 +0000 UTC m=+0.069527257 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO 
Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Dec 2 03:47:03 localhost systemd[1]: tmp-crun.BAwf0D.mount: Deactivated successfully. Dec 2 03:47:03 localhost podman[92383]: 2025-12-02 08:47:03.139670495 +0000 UTC m=+0.144722577 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=) Dec 2 
03:47:03 localhost podman[92383]: 2025-12-02 08:47:03.164012009 +0000 UTC m=+0.169064041 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step5, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:47:03 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:47:03 localhost podman[92384]: 2025-12-02 08:47:03.435274104 +0000 UTC m=+0.438496310 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, container_name=nova_migration_target) Dec 2 03:47:03 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:47:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:47:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:47:06 localhost podman[92433]: 2025-12-02 08:47:06.083732738 +0000 UTC m=+0.084803955 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, version=17.1.12, distribution-scope=public) Dec 2 03:47:06 localhost podman[92433]: 2025-12-02 08:47:06.106755192 +0000 UTC m=+0.107826389 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64) Dec 2 03:47:06 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully. Dec 2 03:47:06 localhost podman[92432]: 2025-12-02 08:47:06.183326723 +0000 UTC m=+0.186665429 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:47:06 localhost podman[92432]: 2025-12-02 08:47:06.232972182 +0000 UTC m=+0.236310888 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true) Dec 2 03:47:06 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:47:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:47:10 localhost podman[92479]: 2025-12-02 08:47:10.06304039 +0000 UTC m=+0.073475328 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, 
managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 2 03:47:10 localhost podman[92479]: 2025-12-02 08:47:10.077859072 +0000 UTC m=+0.088293940 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, config_id=tripleo_step3, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:47:10 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:47:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:47:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:47:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:47:13 localhost podman[92502]: 2025-12-02 08:47:13.075027281 +0000 UTC m=+0.081746661 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3) Dec 2 03:47:13 localhost podman[92502]: 2025-12-02 08:47:13.084962305 +0000 UTC m=+0.091681665 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.41.4) Dec 2 03:47:13 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 03:47:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:47:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.2 total, 600.0 interval#012Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:47:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:47:24 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:47:25 localhost recover_tripleo_nova_virtqemud[92528]: 61907 Dec 2 03:47:25 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:47:25 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:47:25 localhost systemd[1]: tmp-crun.SgRY0A.mount: Deactivated successfully. 
Dec 2 03:47:25 localhost podman[92521]: 2025-12-02 08:47:25.087478099 +0000 UTC m=+0.089906890 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red 
Hat, Inc., architecture=x86_64, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container) Dec 2 03:47:25 localhost podman[92521]: 2025-12-02 08:47:25.306856689 +0000 UTC m=+0.309285470 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, container_name=metrics_qdr, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:47:25 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:47:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:47:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:47:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:47:30 localhost podman[92552]: 2025-12-02 08:47:30.081628189 +0000 UTC m=+0.086356693 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=logrotate_crond, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:47:30 localhost systemd[1]: tmp-crun.ST722N.mount: Deactivated successfully. Dec 2 03:47:30 localhost podman[92553]: 2025-12-02 08:47:30.138673193 +0000 UTC m=+0.141187269 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:47:30 localhost podman[92552]: 2025-12-02 08:47:30.166244566 +0000 UTC m=+0.170973090 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:47:30 localhost systemd[1]: tmp-crun.yY6ws1.mount: Deactivated successfully. 
Dec 2 03:47:30 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:47:30 localhost podman[92554]: 2025-12-02 08:47:30.188718273 +0000 UTC m=+0.186907096 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc.) Dec 2 03:47:30 localhost podman[92553]: 2025-12-02 08:47:30.220086512 +0000 UTC m=+0.222600568 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, version=17.1.12, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:47:30 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:47:30 localhost podman[92554]: 2025-12-02 08:47:30.242034414 +0000 UTC m=+0.240223057 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:47:30 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:47:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:47:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:47:33 localhost systemd[1]: tmp-crun.7Kg4MV.mount: Deactivated successfully. 
Dec 2 03:47:33 localhost podman[92701]: 2025-12-02 08:47:33.668319044 +0000 UTC m=+0.075284573 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:47:33 localhost podman[92702]: 2025-12-02 08:47:33.727267627 +0000 UTC m=+0.133344998 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com) Dec 2 03:47:33 localhost podman[92701]: 2025-12-02 
08:47:33.750630682 +0000 UTC m=+0.157596271 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=nova_compute) Dec 2 03:47:33 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 03:47:34 localhost podman[92702]: 2025-12-02 08:47:34.128760315 +0000 UTC m=+0.534837646 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:47:34 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:47:34 localhost podman[92803]: Dec 2 03:47:34 localhost podman[92803]: 2025-12-02 08:47:34.226893867 +0000 UTC m=+0.072666453 container create 65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sinoussi, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-type=git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, ceph=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, architecture=x86_64) Dec 2 03:47:34 localhost systemd[1]: Started libpod-conmon-65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29.scope. Dec 2 03:47:34 localhost systemd[1]: Started libcrun container. Dec 2 03:47:34 localhost podman[92803]: 2025-12-02 08:47:34.296278339 +0000 UTC m=+0.142050895 container init 65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sinoussi, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, com.redhat.component=rhceph-container, GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=) Dec 2 03:47:34 localhost podman[92803]: 2025-12-02 08:47:34.197875269 +0000 UTC m=+0.043647835 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 03:47:34 localhost podman[92803]: 2025-12-02 08:47:34.306272164 +0000 UTC m=+0.152044720 container start 
65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sinoussi, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.expose-services=) Dec 2 03:47:34 localhost podman[92803]: 2025-12-02 08:47:34.306395278 +0000 UTC m=+0.152167834 container attach 65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sinoussi, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1763362218, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully 
featured and supported base image., RELEASE=main, distribution-scope=public, version=7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph) Dec 2 03:47:34 localhost practical_sinoussi[92819]: 167 167 Dec 2 03:47:34 localhost systemd[1]: libpod-65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29.scope: Deactivated successfully. Dec 2 03:47:34 localhost podman[92803]: 2025-12-02 08:47:34.312572587 +0000 UTC m=+0.158345163 container died 65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sinoussi, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-11-26T19:44:28Z, RELEASE=main, version=7, architecture=x86_64, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red 
Hat, Inc., GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=) Dec 2 03:47:34 localhost podman[92824]: 2025-12-02 08:47:34.382884487 +0000 UTC m=+0.062339137 container remove 65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sinoussi, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, release=1763362218, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, distribution-scope=public, name=rhceph, RELEASE=main, version=7, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 03:47:34 localhost systemd[1]: libpod-conmon-65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29.scope: Deactivated successfully. 
Dec 2 03:47:34 localhost podman[92845]: Dec 2 03:47:34 localhost podman[92845]: 2025-12-02 08:47:34.565872133 +0000 UTC m=+0.052191817 container create 51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_mccarthy, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public, RELEASE=main, CEPH_POINT_RELEASE=, version=7, name=rhceph) Dec 2 03:47:34 localhost systemd[1]: Started libpod-conmon-51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb.scope. Dec 2 03:47:34 localhost systemd[1]: Started libcrun container. 
Dec 2 03:47:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a71f6a8c6ec1d1ab1b3686d5b40fa2aa704eaf1d0854ac4a3921bab55c8903/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 03:47:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a71f6a8c6ec1d1ab1b3686d5b40fa2aa704eaf1d0854ac4a3921bab55c8903/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:47:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a71f6a8c6ec1d1ab1b3686d5b40fa2aa704eaf1d0854ac4a3921bab55c8903/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 03:47:34 localhost podman[92845]: 2025-12-02 08:47:34.623925698 +0000 UTC m=+0.110245342 container init 51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_mccarthy, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux , release=1763362218, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, distribution-scope=public, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, ceph=True, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, 
description=Red Hat Ceph Storage 7) Dec 2 03:47:34 localhost podman[92845]: 2025-12-02 08:47:34.630288903 +0000 UTC m=+0.116608547 container start 51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_mccarthy, distribution-scope=public, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, ceph=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7) Dec 2 03:47:34 localhost podman[92845]: 2025-12-02 08:47:34.630466829 +0000 UTC m=+0.116786493 container attach 51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_mccarthy, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, 
architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1763362218) Dec 2 03:47:34 localhost podman[92845]: 2025-12-02 08:47:34.545676716 +0000 UTC m=+0.031996390 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 03:47:34 localhost systemd[1]: var-lib-containers-storage-overlay-3703270082c346e915c0c2c64e6bbc9751038af99dd0c78722d640cc272922a2-merged.mount: Deactivated successfully. 
Dec 2 03:47:35 localhost pensive_mccarthy[92860]: [ Dec 2 03:47:35 localhost pensive_mccarthy[92860]: { Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "available": false, Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "ceph_device": false, Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "device_id": "QEMU_DVD-ROM_QM00001", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "lsm_data": {}, Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "lvs": [], Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "path": "/dev/sr0", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "rejected_reasons": [ Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "Insufficient space (<5GB)", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "Has a FileSystem" Dec 2 03:47:35 localhost pensive_mccarthy[92860]: ], Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "sys_api": { Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "actuators": null, Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "device_nodes": "sr0", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "human_readable_size": "482.00 KB", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "id_bus": "ata", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "model": "QEMU DVD-ROM", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "nr_requests": "2", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "partitions": {}, Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "path": "/dev/sr0", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "removable": "1", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "rev": "2.5+", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "ro": "0", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "rotational": "1", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "sas_address": "", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "sas_device_handle": "", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "scheduler_mode": "mq-deadline", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "sectors": 0, 
Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "sectorsize": "2048", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "size": 493568.0, Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "support_discard": "0", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "type": "disk", Dec 2 03:47:35 localhost pensive_mccarthy[92860]: "vendor": "QEMU" Dec 2 03:47:35 localhost pensive_mccarthy[92860]: } Dec 2 03:47:35 localhost pensive_mccarthy[92860]: } Dec 2 03:47:35 localhost pensive_mccarthy[92860]: ] Dec 2 03:47:35 localhost systemd[1]: libpod-51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb.scope: Deactivated successfully. Dec 2 03:47:35 localhost podman[92845]: 2025-12-02 08:47:35.552339921 +0000 UTC m=+1.038659615 container died 51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_mccarthy, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, version=7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z) Dec 2 03:47:35 
localhost systemd[1]: tmp-crun.PYKnMo.mount: Deactivated successfully. Dec 2 03:47:35 localhost systemd[1]: var-lib-containers-storage-overlay-d5a71f6a8c6ec1d1ab1b3686d5b40fa2aa704eaf1d0854ac4a3921bab55c8903-merged.mount: Deactivated successfully. Dec 2 03:47:35 localhost podman[94894]: 2025-12-02 08:47:35.65924854 +0000 UTC m=+0.092942023 container remove 51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_mccarthy, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , release=1763362218, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_CLEAN=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 03:47:35 localhost systemd[1]: libpod-conmon-51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb.scope: Deactivated successfully. Dec 2 03:47:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. 
Dec 2 03:47:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:47:37 localhost podman[94908]: 2025-12-02 08:47:37.073324575 +0000 UTC m=+0.075627734 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:47:37 localhost podman[94908]: 2025-12-02 08:47:37.116907497 +0000 UTC m=+0.119210616 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, 
managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:47:37 localhost systemd[1]: tmp-crun.c9TKR0.mount: Deactivated successfully. Dec 2 03:47:37 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:47:37 localhost podman[94909]: 2025-12-02 08:47:37.137440396 +0000 UTC m=+0.138720024 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 
'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4) Dec 2 03:47:37 localhost podman[94909]: 2025-12-02 08:47:37.187795085 +0000 UTC m=+0.189074704 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, description=Red Hat 
OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, release=1761123044) Dec 2 03:47:37 localhost podman[94909]: unhealthy Dec 2 03:47:37 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:47:37 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:47:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:47:41 localhost podman[94974]: 2025-12-02 08:47:41.083307256 +0000 UTC m=+0.084763783 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, name=rhosp17/openstack-collectd, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 2 03:47:41 localhost podman[94974]: 2025-12-02 08:47:41.114301934 +0000 UTC m=+0.115758471 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=collectd, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 2 03:47:41 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. 
Dec 2 03:47:41 localhost sshd[94994]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:47:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:47:44 localhost podman[94995]: 2025-12-02 08:47:44.072021035 +0000 UTC m=+0.073083656 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, 
batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, tcib_managed=true) Dec 2 03:47:44 localhost podman[94995]: 2025-12-02 08:47:44.108064407 +0000 UTC m=+0.109126988 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, vendor=Red Hat, Inc.) Dec 2 03:47:44 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:47:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:47:56 localhost systemd[1]: tmp-crun.6nf2HH.mount: Deactivated successfully. 
Dec 2 03:47:56 localhost podman[95015]: 2025-12-02 08:47:56.086957431 +0000 UTC m=+0.092848990 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=metrics_qdr, config_id=tripleo_step1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044) Dec 2 03:47:56 localhost podman[95015]: 2025-12-02 08:47:56.292951731 +0000 UTC m=+0.298843300 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vendor=Red Hat, Inc., container_name=metrics_qdr, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible) Dec 2 03:47:56 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:48:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:48:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:48:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:48:01 localhost podman[95044]: 2025-12-02 08:48:01.081014408 +0000 UTC m=+0.080393860 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, distribution-scope=public) Dec 2 03:48:01 localhost podman[95044]: 2025-12-02 08:48:01.107789316 +0000 UTC m=+0.107168758 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:48:01 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:48:01 localhost podman[95045]: 2025-12-02 08:48:01.184944406 +0000 UTC m=+0.181471271 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true) Dec 2 03:48:01 localhost podman[95045]: 2025-12-02 08:48:01.215519962 +0000 UTC m=+0.212046817 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Dec 2 03:48:01 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:48:01 localhost podman[95043]: 2025-12-02 08:48:01.233681997 +0000 UTC m=+0.236601607 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 03:48:01 localhost podman[95043]: 2025-12-02 08:48:01.263977373 +0000 UTC m=+0.266897013 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-type=git, container_name=logrotate_crond, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 
'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com) Dec 2 03:48:01 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. 
Dec 2 03:48:04 localhost podman[95115]: 2025-12-02 08:48:04.071240434 +0000 UTC m=+0.079967936 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:48:04 localhost podman[95115]: 2025-12-02 08:48:04.101932502 +0000 UTC m=+0.110660034 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute) Dec 2 03:48:04 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:48:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:48:05 localhost podman[95141]: 2025-12-02 08:48:05.074711701 +0000 UTC m=+0.079704288 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute) Dec 2 03:48:05 localhost podman[95141]: 2025-12-02 08:48:05.447025957 +0000 UTC m=+0.452019004 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:48:05 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:48:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:48:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:48:08 localhost systemd[1]: tmp-crun.YEQ0u3.mount: Deactivated successfully. Dec 2 03:48:08 localhost podman[95166]: 2025-12-02 08:48:08.049329649 +0000 UTC m=+0.060830822 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:48:08 localhost podman[95166]: 2025-12-02 08:48:08.100851525 +0000 UTC m=+0.112352748 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, url=https://www.redhat.com, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1) Dec 2 03:48:08 localhost podman[95166]: unhealthy Dec 2 03:48:08 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:48:08 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. 
Dec 2 03:48:08 localhost podman[95165]: 2025-12-02 08:48:08.103996552 +0000 UTC m=+0.113958387 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:48:08 localhost podman[95165]: 2025-12-02 08:48:08.192013123 +0000 UTC m=+0.201974948 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, container_name=ovn_metadata_agent, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:48:08 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully. Dec 2 03:48:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:48:12 localhost podman[95215]: 2025-12-02 08:48:12.075262479 +0000 UTC m=+0.082239056 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-type=git, name=rhosp17/openstack-collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 2 03:48:12 localhost podman[95215]: 2025-12-02 08:48:12.084192172 +0000 UTC m=+0.091168739 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, 
managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Dec 2 03:48:12 
localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:48:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:48:15 localhost podman[95237]: 2025-12-02 08:48:15.05192063 +0000 UTC m=+0.057858191 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, container_name=iscsid, maintainer=OpenStack TripleO Team, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=) Dec 2 03:48:15 localhost podman[95237]: 2025-12-02 08:48:15.058248783 +0000 UTC m=+0.064186364 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-iscsid, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 2 03:48:15 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:48:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:48:27 localhost podman[95256]: 2025-12-02 08:48:27.328150506 +0000 UTC m=+0.081829253 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=) Dec 2 03:48:27 localhost podman[95256]: 2025-12-02 08:48:27.524949624 +0000 UTC m=+0.278628411 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:48:27 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:48:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:48:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:48:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:48:32 localhost podman[95286]: 2025-12-02 08:48:32.066603626 +0000 UTC m=+0.069179697 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:48:32 localhost podman[95285]: 2025-12-02 08:48:32.091826507 +0000 UTC m=+0.095274884 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, 
config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team) Dec 2 03:48:32 localhost podman[95285]: 2025-12-02 08:48:32.099251424 +0000 UTC m=+0.102699831 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-cron-container, release=1761123044, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:48:32 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:48:32 localhost podman[95286]: 2025-12-02 08:48:32.134864573 +0000 UTC m=+0.137440644 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true) Dec 2 03:48:32 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:48:32 localhost podman[95287]: 2025-12-02 08:48:32.183220762 +0000 UTC m=+0.179532812 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, 
konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi) Dec 2 03:48:32 localhost podman[95287]: 2025-12-02 08:48:32.21160772 +0000 UTC m=+0.207919800 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 
17.1_20251118.1, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:48:32 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:48:35 localhost podman[95354]: 2025-12-02 08:48:35.078116952 +0000 UTC m=+0.081438193 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute) Dec 2 03:48:35 localhost podman[95354]: 2025-12-02 08:48:35.136851717 +0000 UTC m=+0.140172888 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
tcib_managed=true, container_name=nova_compute, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step5) Dec 2 03:48:35 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:48:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:48:36 localhost systemd[1]: tmp-crun.bbrLaw.mount: Deactivated successfully. 
Dec 2 03:48:36 localhost podman[95379]: 2025-12-02 08:48:36.09470397 +0000 UTC m=+0.090091426 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=nova_migration_target, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Dec 2 03:48:36 localhost podman[95379]: 2025-12-02 08:48:36.494093284 +0000 UTC m=+0.489480740 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:48:36 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:48:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:48:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:48:39 localhost systemd[1]: tmp-crun.tuNFF1.mount: Deactivated successfully. 
Dec 2 03:48:39 localhost podman[95401]: 2025-12-02 08:48:39.084418831 +0000 UTC m=+0.085852757 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, container_name=ovn_controller, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller) Dec 2 03:48:39 localhost podman[95401]: 2025-12-02 08:48:39.132509392 +0000 UTC m=+0.133943288 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, container_name=ovn_controller, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.buildah.version=1.41.4) Dec 2 03:48:39 localhost podman[95401]: unhealthy Dec 2 03:48:39 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:48:39 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:48:39 localhost podman[95400]: 2025-12-02 08:48:39.137443383 +0000 UTC m=+0.138224009 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:48:39 localhost podman[95400]: 2025-12-02 08:48:39.236519672 +0000 UTC m=+0.237300298 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, config_id=tripleo_step4, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:48:39 localhost podman[95400]: unhealthy Dec 2 03:48:39 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:48:39 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 03:48:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:48:43 localhost podman[95568]: 2025-12-02 08:48:43.049082677 +0000 UTC m=+0.055499069 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:48:43 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:48:43 localhost podman[95568]: 2025-12-02 08:48:43.091016809 +0000 UTC m=+0.097433241 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, release=1761123044, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12) Dec 2 03:48:43 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. 
Dec 2 03:48:43 localhost recover_tripleo_nova_virtqemud[95590]: 61907 Dec 2 03:48:43 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:48:43 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:48:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:48:46 localhost podman[95591]: 2025-12-02 08:48:46.065799463 +0000 UTC m=+0.067861516 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, release=1761123044, distribution-scope=public, version=17.1.12, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4) Dec 2 03:48:46 localhost podman[95591]: 2025-12-02 08:48:46.105916599 +0000 UTC m=+0.107978702 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=) Dec 2 03:48:46 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:48:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:48:58 localhost systemd[1]: tmp-crun.4oZ36J.mount: Deactivated successfully. 
Dec 2 03:48:58 localhost podman[95609]: 2025-12-02 08:48:58.075571919 +0000 UTC m=+0.082885366 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, 
name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Dec 2 03:48:58 localhost podman[95609]: 2025-12-02 08:48:58.257960907 +0000 UTC m=+0.265274324 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, distribution-scope=public) Dec 2 03:48:58 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:49:00 localhost sshd[95638]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:49:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:49:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:49:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:49:03 localhost podman[95642]: 2025-12-02 08:49:03.069011626 +0000 UTC m=+0.067240308 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, release=1761123044, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:49:03 localhost systemd[1]: tmp-crun.MlcdYJ.mount: Deactivated successfully. Dec 2 03:49:03 localhost podman[95642]: 2025-12-02 08:49:03.114710014 +0000 UTC m=+0.112938686 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, 
io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:49:03 localhost podman[95641]: 2025-12-02 08:49:03.117031464 +0000 UTC m=+0.116696979 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 
'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, 
container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:49:03 localhost podman[95641]: 2025-12-02 08:49:03.137231533 +0000 UTC m=+0.136897038 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, maintainer=OpenStack TripleO Team) Dec 2 03:49:03 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:49:03 localhost podman[95640]: 2025-12-02 08:49:03.172336796 +0000 UTC m=+0.172144955 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, release=1761123044, container_name=logrotate_crond, architecture=x86_64, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:49:03 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:49:03 localhost podman[95640]: 2025-12-02 08:49:03.20682627 +0000 UTC m=+0.206634419 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, version=17.1.12, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, release=1761123044, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64) Dec 2 03:49:03 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:49:06 localhost podman[95712]: 2025-12-02 08:49:06.075640293 +0000 UTC m=+0.079809192 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, container_name=nova_compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, vendor=Red Hat, 
Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64) Dec 2 03:49:06 localhost podman[95712]: 2025-12-02 08:49:06.124015892 +0000 UTC m=+0.128184811 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=nova_compute, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:49:06 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:49:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:49:07 localhost podman[95739]: 2025-12-02 08:49:07.083797343 +0000 UTC m=+0.088393653 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, release=1761123044, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:49:07 localhost podman[95739]: 2025-12-02 08:49:07.476791812 +0000 UTC m=+0.481388052 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, vcs-type=git, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:49:07 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:49:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:49:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:49:10 localhost podman[95762]: 2025-12-02 08:49:10.070341957 +0000 UTC m=+0.075665015 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) 
Dec 2 03:49:10 localhost podman[95762]: 2025-12-02 08:49:10.110768653 +0000 UTC m=+0.116091721 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:49:10 localhost podman[95762]: unhealthy Dec 2 03:49:10 localhost systemd[1]: tmp-crun.J2llWv.mount: Deactivated successfully. Dec 2 03:49:10 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:49:10 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 03:49:10 localhost podman[95763]: 2025-12-02 08:49:10.132211949 +0000 UTC m=+0.133442112 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 03:49:10 localhost podman[95763]: 2025-12-02 08:49:10.146269079 +0000 UTC m=+0.147499222 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, 
io.openshift.expose-services=, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true) Dec 2 03:49:10 localhost podman[95763]: unhealthy Dec 2 03:49:10 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:49:10 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:49:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:49:14 localhost podman[95801]: 2025-12-02 08:49:14.081631707 +0000 UTC m=+0.082624347 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com) Dec 2 03:49:14 localhost podman[95801]: 2025-12-02 08:49:14.116839804 +0000 UTC m=+0.117832444 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:49:14 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:49:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:49:17 localhost podman[95821]: 2025-12-02 08:49:17.054886524 +0000 UTC m=+0.066626979 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-type=git, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:49:17 localhost podman[95821]: 2025-12-02 08:49:17.093530026 +0000 UTC m=+0.105270491 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, container_name=iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Dec 2 03:49:17 
localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:49:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:49:29 localhost systemd[1]: tmp-crun.pYuLdU.mount: Deactivated successfully. Dec 2 03:49:29 localhost podman[95840]: 2025-12-02 08:49:29.068745363 +0000 UTC m=+0.059562782 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1) Dec 2 03:49:29 localhost podman[95840]: 2025-12-02 08:49:29.252492123 +0000 UTC m=+0.243309502 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, tcib_managed=true, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:49:29 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:49:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. 
Dec 2 03:49:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:49:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:49:34 localhost systemd[1]: tmp-crun.VAtNaH.mount: Deactivated successfully. Dec 2 03:49:34 localhost podman[95871]: 2025-12-02 08:49:34.120319749 +0000 UTC m=+0.080540834 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:49:34 localhost podman[95870]: 2025-12-02 08:49:34.089677812 +0000 UTC m=+0.055489508 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-cron, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:49:34 localhost podman[95872]: 2025-12-02 08:49:34.153650937 +0000 UTC m=+0.112961505 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, 
io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Dec 2 03:49:34 localhost podman[95871]: 2025-12-02 08:49:34.165897913 +0000 UTC m=+0.126119068 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, release=1761123044, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:49:34 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:49:34 localhost podman[95872]: 2025-12-02 08:49:34.181644314 +0000 UTC m=+0.140954822 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public) Dec 2 03:49:34 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:49:34 localhost podman[95870]: 2025-12-02 08:49:34.220024117 +0000 UTC m=+0.185835853 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, distribution-scope=public, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, container_name=logrotate_crond) Dec 2 03:49:34 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. 
Dec 2 03:49:37 localhost podman[95942]: 2025-12-02 08:49:37.088385206 +0000 UTC m=+0.087799676 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, container_name=nova_compute, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step5, vcs-type=git, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:49:37 localhost podman[95942]: 2025-12-02 08:49:37.138945402 +0000 UTC m=+0.138359882 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-19T00:36:58Z, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:49:37 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:49:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:49:38 localhost systemd[1]: tmp-crun.M9V0l2.mount: Deactivated successfully. Dec 2 03:49:38 localhost podman[95969]: 2025-12-02 08:49:38.083548289 +0000 UTC m=+0.087967361 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:49:38 localhost podman[95969]: 2025-12-02 08:49:38.474886187 +0000 UTC m=+0.479305309 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, 
name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:49:38 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:49:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:49:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:49:41 localhost systemd[1]: tmp-crun.XMhF7w.mount: Deactivated successfully. Dec 2 03:49:41 localhost podman[95991]: 2025-12-02 08:49:41.079637945 +0000 UTC m=+0.087611341 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, architecture=x86_64) Dec 2 03:49:41 localhost podman[95992]: 2025-12-02 08:49:41.124542318 +0000 UTC m=+0.131776051 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, 
name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z) Dec 2 03:49:41 localhost podman[95991]: 2025-12-02 
08:49:41.143161457 +0000 UTC m=+0.151134913 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
container_name=ovn_metadata_agent, distribution-scope=public, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:49:41 localhost podman[95991]: unhealthy Dec 2 03:49:41 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:49:41 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 03:49:41 localhost podman[95992]: 2025-12-02 08:49:41.168088649 +0000 UTC m=+0.175322372 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, version=17.1.12, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:49:41 localhost podman[95992]: unhealthy Dec 2 03:49:41 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:49:41 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:49:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:49:45 localhost podman[96108]: 2025-12-02 08:49:45.082092016 +0000 UTC m=+0.089129836 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, io.buildah.version=1.41.4, container_name=collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:49:45 localhost podman[96108]: 2025-12-02 08:49:45.123837532 +0000 UTC m=+0.130875332 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, config_id=tripleo_step3, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.12, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 2 03:49:45 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:49:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:49:48 localhost podman[96128]: 2025-12-02 08:49:48.056478717 +0000 UTC m=+0.063462831 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, architecture=x86_64, tcib_managed=true, container_name=iscsid, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:49:48 localhost podman[96128]: 2025-12-02 08:49:48.069163995 +0000 UTC m=+0.076148119 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git) Dec 2 03:49:48 localhost systemd[1]: 
f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:49:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:50:00 localhost podman[96147]: 2025-12-02 08:50:00.079484527 +0000 UTC m=+0.078818351 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, container_name=metrics_qdr, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:50:00 localhost podman[96147]: 2025-12-02 08:50:00.274674147 +0000 UTC m=+0.274008061 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
container_name=metrics_qdr, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true) Dec 2 03:50:00 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:50:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:50:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. 
Dec 2 03:50:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:50:05 localhost podman[96176]: 2025-12-02 08:50:05.077317588 +0000 UTC m=+0.082195234 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, vcs-type=git, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com) Dec 2 03:50:05 localhost podman[96176]: 2025-12-02 08:50:05.089342235 +0000 UTC m=+0.094219941 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=logrotate_crond, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64) Dec 2 03:50:05 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:50:05 localhost podman[96178]: 2025-12-02 08:50:05.137699355 +0000 UTC m=+0.139293111 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 2 03:50:05 localhost podman[96177]: 2025-12-02 08:50:05.185131375 +0000 UTC m=+0.189112694 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, vcs-type=git, url=https://www.redhat.com, container_name=ceilometer_agent_compute, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:50:05 localhost podman[96177]: 2025-12-02 08:50:05.213215694 +0000 UTC m=+0.217197013 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, release=1761123044, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, 
konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:50:05 localhost podman[96178]: 2025-12-02 08:50:05.214780242 +0000 UTC m=+0.216373948 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, 
batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi) Dec 2 03:50:05 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:50:05 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:50:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. 
Dec 2 03:50:08 localhost podman[96248]: 2025-12-02 08:50:08.075859947 +0000 UTC m=+0.078714418 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:50:08 localhost podman[96248]: 2025-12-02 08:50:08.103831792 +0000 UTC m=+0.106686253 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack 
TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12) Dec 2 03:50:08 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:50:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:50:09 localhost systemd[1]: tmp-crun.770tkU.mount: Deactivated successfully. Dec 2 03:50:09 localhost podman[96274]: 2025-12-02 08:50:09.06770596 +0000 UTC m=+0.079686179 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, container_name=nova_migration_target, batch=17.1_20251118.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Dec 2 03:50:09 localhost podman[96274]: 2025-12-02 08:50:09.461911334 +0000 UTC m=+0.473891573 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, 
name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=) Dec 2 03:50:09 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:50:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:50:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:50:12 localhost podman[96297]: 2025-12-02 08:50:12.072371516 +0000 UTC m=+0.080499483 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Dec 2 03:50:12 localhost podman[96297]: 2025-12-02 08:50:12.08884597 +0000 UTC m=+0.096973927 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public) Dec 2 03:50:12 localhost podman[96297]: unhealthy Dec 2 03:50:12 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:50:12 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 03:50:12 localhost systemd[1]: tmp-crun.d5lPCW.mount: Deactivated successfully. 
Dec 2 03:50:12 localhost podman[96298]: 2025-12-02 08:50:12.176704627 +0000 UTC m=+0.179010056 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:50:12 localhost podman[96298]: 2025-12-02 08:50:12.192332175 +0000 UTC m=+0.194637594 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, 
com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:50:12 localhost podman[96298]: unhealthy Dec 2 03:50:12 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:50:12 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:50:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:50:16 localhost systemd[1]: tmp-crun.vmNpRx.mount: Deactivated successfully. 
Dec 2 03:50:16 localhost podman[96338]: 2025-12-02 08:50:16.069415312 +0000 UTC m=+0.075760958 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, architecture=x86_64, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:50:16 localhost podman[96338]: 2025-12-02 08:50:16.083040149 +0000 UTC m=+0.089385785 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Dec 2 03:50:16 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:50:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:50:19 localhost podman[96359]: 2025-12-02 08:50:19.066633462 +0000 UTC m=+0.075511530 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:50:19 localhost podman[96359]: 2025-12-02 08:50:19.075817693 +0000 UTC m=+0.084695771 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:50:19 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:50:24 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:50:25 localhost recover_tripleo_nova_virtqemud[96379]: 61907 Dec 2 03:50:25 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:50:25 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:50:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:50:31 localhost systemd[1]: tmp-crun.82FxN6.mount: Deactivated successfully. 
Dec 2 03:50:31 localhost podman[96380]: 2025-12-02 08:50:31.086995733 +0000 UTC m=+0.092834811 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:50:31 localhost podman[96380]: 2025-12-02 08:50:31.277372095 +0000 UTC m=+0.283211233 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 2 03:50:31 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:50:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:50:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:50:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:50:36 localhost podman[96410]: 2025-12-02 08:50:36.089782815 +0000 UTC m=+0.091775247 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4) Dec 2 03:50:36 localhost podman[96410]: 2025-12-02 08:50:36.0997304 +0000 UTC m=+0.101722842 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, tcib_managed=true, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:50:36 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:50:36 localhost podman[96412]: 2025-12-02 08:50:36.140803876 +0000 UTC m=+0.137668261 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, architecture=x86_64, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, 
batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible) Dec 2 03:50:36 localhost podman[96411]: 2025-12-02 08:50:36.062601715 +0000 UTC m=+0.067770194 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 2 03:50:36 localhost podman[96412]: 2025-12-02 08:50:36.173963951 +0000 UTC m=+0.170828316 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true) Dec 2 03:50:36 localhost systemd[1]: 
a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:50:36 localhost podman[96411]: 2025-12-02 08:50:36.197101768 +0000 UTC m=+0.202270287 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12) Dec 2 03:50:36 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:50:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. 
Dec 2 03:50:39 localhost podman[96482]: 2025-12-02 08:50:39.061538445 +0000 UTC m=+0.065216995 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, vcs-type=git, release=1761123044, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, config_id=tripleo_step5, io.buildah.version=1.41.4) Dec 2 03:50:39 localhost podman[96482]: 2025-12-02 08:50:39.118919901 +0000 UTC m=+0.122598411 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, config_id=tripleo_step5, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1) Dec 2 03:50:39 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:50:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:50:40 localhost podman[96508]: 2025-12-02 08:50:40.079690103 +0000 UTC m=+0.084891358 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, vcs-type=git, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1761123044, 
architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Dec 2 03:50:40 localhost podman[96508]: 2025-12-02 08:50:40.385806444 +0000 UTC m=+0.391007689 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.expose-services=, 
config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, 
url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public) Dec 2 03:50:40 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:50:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:50:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:50:42 localhost podman[96547]: 2025-12-02 08:50:42.826069731 +0000 UTC m=+0.072136838 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:50:42 localhost podman[96547]: 2025-12-02 08:50:42.837705146 +0000 UTC m=+0.083772183 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible) Dec 2 03:50:42 localhost podman[96547]: unhealthy Dec 2 03:50:42 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:50:42 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. 
Dec 2 03:50:42 localhost podman[96545]: 2025-12-02 08:50:42.885495958 +0000 UTC m=+0.130375358 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 03:50:42 localhost podman[96545]: 2025-12-02 08:50:42.897590408 +0000 UTC m=+0.142469788 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:50:42 localhost podman[96545]: unhealthy Dec 2 03:50:42 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:50:42 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 03:50:43 localhost podman[96670]: 2025-12-02 08:50:43.581520244 +0000 UTC m=+0.059497040 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, maintainer=Guillaume Abrioux , RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, architecture=x86_64, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main) Dec 2 03:50:43 localhost podman[96670]: 2025-12-02 08:50:43.672740133 +0000 UTC m=+0.150716939 container exec_died 
306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, name=rhceph, version=7, ceph=True, vendor=Red Hat, Inc.) Dec 2 03:50:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:50:47 localhost podman[96815]: 2025-12-02 08:50:47.057781923 +0000 UTC m=+0.065855806 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, vcs-type=git, 
container_name=collectd, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:50:47 localhost podman[96815]: 2025-12-02 08:50:47.064497527 +0000 UTC m=+0.072571400 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:50:47 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:50:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:50:50 localhost systemd[1]: tmp-crun.IlDVRv.mount: Deactivated successfully. 
Dec 2 03:50:50 localhost podman[96835]: 2025-12-02 08:50:50.084794292 +0000 UTC m=+0.087801795 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, 
vcs-type=git, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:50:50 localhost podman[96835]: 2025-12-02 08:50:50.120089192 +0000 UTC m=+0.123096765 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:50:50 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:51:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:51:02 localhost podman[96854]: 2025-12-02 08:51:02.087550476 +0000 UTC m=+0.091794618 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-type=git, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:51:02 localhost podman[96854]: 2025-12-02 08:51:02.309982848 +0000 UTC m=+0.314226990 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Dec 2 03:51:02 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:51:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:51:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:51:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:51:07 localhost systemd[1]: tmp-crun.R9ao15.mount: Deactivated successfully. 
Dec 2 03:51:07 localhost podman[96884]: 2025-12-02 08:51:07.08158644 +0000 UTC m=+0.077807079 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, build-date=2025-11-18T22:49:32Z, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git) Dec 2 03:51:07 localhost podman[96884]: 2025-12-02 08:51:07.12015489 +0000 UTC m=+0.116375539 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git) Dec 2 03:51:07 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:51:07 localhost podman[96885]: 2025-12-02 08:51:07.143951718 +0000 UTC m=+0.139237779 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:51:07 localhost podman[96886]: 2025-12-02 08:51:07.154090998 +0000 UTC m=+0.145223023 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, 
distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Dec 2 03:51:07 localhost podman[96885]: 2025-12-02 08:51:07.181878218 +0000 UTC m=+0.177164309 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., 
konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container) Dec 2 03:51:07 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:51:07 localhost podman[96886]: 2025-12-02 08:51:07.21398814 +0000 UTC m=+0.205120175 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12) Dec 2 03:51:07 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. 
Dec 2 03:51:10 localhost podman[96953]: 2025-12-02 08:51:10.088361581 +0000 UTC m=+0.085465485 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc.) 
Dec 2 03:51:10 localhost podman[96953]: 2025-12-02 08:51:10.14422922 +0000 UTC m=+0.141333094 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1) Dec 2 03:51:10 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:51:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:51:11 localhost podman[96979]: 2025-12-02 08:51:11.071652351 +0000 UTC m=+0.075732727 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:51:11 localhost podman[96979]: 2025-12-02 08:51:11.459017267 +0000 UTC m=+0.463097673 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container) Dec 2 03:51:11 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:51:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:51:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:51:13 localhost systemd[1]: tmp-crun.eVyJKp.mount: Deactivated successfully. 
Dec 2 03:51:13 localhost podman[97002]: 2025-12-02 08:51:13.084503586 +0000 UTC m=+0.081135282 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:51:13 localhost podman[97003]: 2025-12-02 08:51:13.14577087 +0000 UTC m=+0.139952281 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:51:13 localhost podman[97002]: 2025-12-02 08:51:13.174262641 +0000 UTC m=+0.170894387 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:51:13 localhost podman[97002]: unhealthy Dec 2 03:51:13 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:51:13 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 03:51:13 localhost podman[97003]: 2025-12-02 08:51:13.187718053 +0000 UTC m=+0.181899484 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 03:51:13 localhost podman[97003]: unhealthy Dec 2 03:51:13 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:51:13 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:51:13 localhost sshd[97042]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:51:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:51:18 localhost systemd[1]: tmp-crun.80E22n.mount: Deactivated successfully. 
Dec 2 03:51:18 localhost podman[97044]: 2025-12-02 08:51:18.091681743 +0000 UTC m=+0.091880381 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, container_name=collectd, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, tcib_managed=true, config_id=tripleo_step3, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Dec 2 03:51:18 localhost podman[97044]: 2025-12-02 08:51:18.107091344 +0000 UTC m=+0.107289972 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:51:18 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:51:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:51:21 localhost systemd[1]: tmp-crun.0qo9ek.mount: Deactivated successfully. Dec 2 03:51:21 localhost podman[97064]: 2025-12-02 08:51:21.077071861 +0000 UTC m=+0.082122962 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:51:21 localhost podman[97064]: 2025-12-02 08:51:21.088966786 +0000 UTC m=+0.094017857 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, container_name=iscsid, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Dec 2 03:51:21 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:51:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:51:33 localhost podman[97084]: 2025-12-02 08:51:33.069944131 +0000 UTC m=+0.076607723 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:51:33 localhost podman[97084]: 2025-12-02 08:51:33.307488826 +0000 UTC m=+0.314152378 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:51:33 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:51:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:51:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:51:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:51:38 localhost systemd[1]: tmp-crun.RIVuUS.mount: Deactivated successfully. 
Dec 2 03:51:38 localhost podman[97115]: 2025-12-02 08:51:38.086140975 +0000 UTC m=+0.090490528 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=logrotate_crond, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:51:38 localhost podman[97115]: 2025-12-02 08:51:38.121837327 +0000 UTC m=+0.126186830 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:51:38 localhost systemd[1]: tmp-crun.jxRd6Y.mount: Deactivated successfully. 
Dec 2 03:51:38 localhost podman[97117]: 2025-12-02 08:51:38.133287137 +0000 UTC m=+0.132625258 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi) Dec 2 03:51:38 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:51:38 localhost podman[97116]: 2025-12-02 08:51:38.195250362 +0000 UTC m=+0.195017236 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:51:38 localhost podman[97117]: 2025-12-02 08:51:38.218848663 +0000 UTC m=+0.218186764 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, release=1761123044, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Dec 2 03:51:38 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:51:38 localhost podman[97116]: 2025-12-02 08:51:38.255958379 +0000 UTC m=+0.255725203 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z) Dec 2 03:51:38 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:51:41 localhost systemd[1]: tmp-crun.HOAGpz.mount: Deactivated successfully. 
Dec 2 03:51:41 localhost podman[97188]: 2025-12-02 08:51:41.065156998 +0000 UTC m=+0.072870980 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, release=1761123044, container_name=nova_compute, url=https://www.redhat.com) Dec 2 03:51:41 localhost podman[97188]: 2025-12-02 08:51:41.093049861 +0000 UTC m=+0.100763883 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc.) Dec 2 03:51:41 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:51:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:51:42 localhost systemd[1]: tmp-crun.Fccq3R.mount: Deactivated successfully. Dec 2 03:51:42 localhost podman[97215]: 2025-12-02 08:51:42.089563466 +0000 UTC m=+0.094201892 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 
17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:51:42 localhost podman[97215]: 2025-12-02 08:51:42.527116897 +0000 UTC m=+0.531755323 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, 
release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:51:42 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:51:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:51:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:51:44 localhost podman[97240]: 2025-12-02 08:51:44.083956258 +0000 UTC m=+0.084046451 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, container_name=ovn_controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.buildah.version=1.41.4) Dec 2 03:51:44 localhost podman[97240]: 2025-12-02 08:51:44.123919 +0000 UTC m=+0.124009193 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, container_name=ovn_controller, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4) Dec 2 03:51:44 localhost podman[97240]: unhealthy Dec 2 03:51:44 localhost systemd[1]: tmp-crun.PyrZ7M.mount: Deactivated successfully. Dec 2 03:51:44 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:51:44 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. 
Dec 2 03:51:44 localhost podman[97239]: 2025-12-02 08:51:44.143746186 +0000 UTC m=+0.148378469 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, build-date=2025-11-19T00:14:25Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) 
Dec 2 03:51:44 localhost podman[97239]: 2025-12-02 08:51:44.185939196 +0000 UTC m=+0.190571429 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Dec 2 03:51:44 localhost podman[97239]: unhealthy Dec 2 03:51:44 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:51:44 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 03:51:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:51:49 localhost podman[97357]: 2025-12-02 08:51:49.086666388 +0000 UTC m=+0.087786386 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container) Dec 2 03:51:49 localhost podman[97357]: 2025-12-02 08:51:49.098022326 +0000 UTC m=+0.099142374 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 
'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:51:49 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:51:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:51:52 localhost systemd[1]: tmp-crun.zX6zu2.mount: Deactivated successfully. 
Dec 2 03:51:52 localhost podman[97377]: 2025-12-02 08:51:52.061832174 +0000 UTC m=+0.069017202 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, 
konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=iscsid, version=17.1.12, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044) Dec 2 03:51:52 localhost podman[97377]: 2025-12-02 08:51:52.096621067 +0000 UTC m=+0.103806035 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 
17.1_20251118.1, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 2 03:51:52 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:52:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:52:03 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:52:04 localhost recover_tripleo_nova_virtqemud[97398]: 61907 Dec 2 03:52:04 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:52:04 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 2 03:52:04 localhost podman[97396]: 2025-12-02 08:52:04.095401247 +0000 UTC m=+0.092702675 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, name=rhosp17/openstack-qdrouterd, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 2 03:52:04 localhost podman[97396]: 2025-12-02 08:52:04.305803282 +0000 UTC m=+0.303104700 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:52:04 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:52:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:52:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:52:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:52:09 localhost podman[97428]: 2025-12-02 08:52:09.094193278 +0000 UTC m=+0.093643415 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team) Dec 2 03:52:09 localhost podman[97430]: 2025-12-02 08:52:09.14949676 +0000 UTC m=+0.141334914 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git) Dec 2 03:52:09 localhost podman[97428]: 2025-12-02 08:52:09.162007742 +0000 UTC m=+0.161457919 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, container_name=logrotate_crond, url=https://www.redhat.com, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z) Dec 2 03:52:09 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:52:09 localhost podman[97430]: 2025-12-02 08:52:09.182803397 +0000 UTC m=+0.174641501 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, 
com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=) Dec 2 03:52:09 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:52:09 localhost podman[97429]: 2025-12-02 08:52:09.239358118 +0000 UTC m=+0.236276158 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:52:09 localhost podman[97429]: 2025-12-02 08:52:09.275136721 +0000 UTC m=+0.272054761 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=) Dec 2 03:52:09 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:52:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:52:12 localhost podman[97500]: 2025-12-02 08:52:12.078805062 +0000 UTC m=+0.080098480 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, container_name=nova_compute, io.buildah.version=1.41.4) Dec 2 03:52:12 localhost podman[97500]: 2025-12-02 08:52:12.129911744 +0000 UTC m=+0.131205212 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step5, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, name=rhosp17/openstack-nova-compute) Dec 2 03:52:12 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:52:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:52:13 localhost podman[97526]: 2025-12-02 08:52:13.081718822 +0000 UTC m=+0.083442833 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:52:13 localhost podman[97526]: 2025-12-02 08:52:13.498837448 +0000 UTC m=+0.500561499 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute) Dec 2 03:52:13 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:52:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:52:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:52:15 localhost systemd[1]: tmp-crun.OxRZuW.mount: Deactivated successfully. 
Dec 2 03:52:15 localhost podman[97551]: 2025-12-02 08:52:15.087921575 +0000 UTC m=+0.089284691 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git) Dec 2 03:52:15 localhost podman[97551]: 2025-12-02 08:52:15.134032406 +0000 UTC m=+0.135395492 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 03:52:15 localhost podman[97551]: unhealthy Dec 2 03:52:15 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:52:15 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:52:15 localhost podman[97550]: 2025-12-02 08:52:15.13482157 +0000 UTC m=+0.139094585 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044) Dec 2 03:52:15 localhost podman[97550]: 2025-12-02 08:52:15.219160579 +0000 UTC m=+0.223433634 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 03:52:15 localhost podman[97550]: unhealthy Dec 2 03:52:15 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:52:15 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 03:52:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:52:20 localhost podman[97591]: 2025-12-02 08:52:20.069649255 +0000 UTC m=+0.077915804 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, container_name=collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git) Dec 2 03:52:20 localhost podman[97591]: 2025-12-02 08:52:20.08093142 +0000 UTC m=+0.089197989 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc.) Dec 2 03:52:20 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:52:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:52:23 localhost podman[97611]: 2025-12-02 08:52:23.086520995 +0000 UTC m=+0.090177469 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, config_id=tripleo_step3, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:52:23 localhost podman[97611]: 2025-12-02 08:52:23.126337003 +0000 UTC m=+0.129993347 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step3, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 03:52:23 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:52:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:52:35 localhost systemd[1]: tmp-crun.0PKPGu.mount: Deactivated successfully. 
Dec 2 03:52:35 localhost podman[97631]: 2025-12-02 08:52:35.098225111 +0000 UTC m=+0.103922899 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Dec 2 03:52:35 localhost podman[97631]: 2025-12-02 08:52:35.319684324 +0000 UTC m=+0.325382072 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com) Dec 2 03:52:35 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:52:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:52:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:52:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:52:40 localhost podman[97662]: 2025-12-02 08:52:40.094831556 +0000 UTC m=+0.089278551 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4) Dec 2 03:52:40 localhost systemd[1]: tmp-crun.x8WbSb.mount: Deactivated successfully. Dec 2 03:52:40 localhost podman[97662]: 2025-12-02 08:52:40.177308798 +0000 UTC m=+0.171755833 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4) Dec 2 03:52:40 localhost podman[97660]: 2025-12-02 08:52:40.189044618 +0000 UTC m=+0.189112725 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, batch=17.1_20251118.1, tcib_managed=true, 
io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:52:40 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated 
successfully. Dec 2 03:52:40 localhost podman[97660]: 2025-12-02 08:52:40.200780067 +0000 UTC m=+0.200848174 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:52:40 localhost podman[97661]: 2025-12-02 08:52:40.159105352 +0000 UTC m=+0.154672191 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc.) Dec 2 03:52:40 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:52:40 localhost podman[97661]: 2025-12-02 08:52:40.241967845 +0000 UTC m=+0.237534654 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 2 03:52:40 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:52:43 localhost podman[97731]: 2025-12-02 08:52:43.075800738 +0000 UTC m=+0.079062309 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, version=17.1.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-19T00:36:58Z) Dec 2 03:52:43 localhost podman[97731]: 2025-12-02 08:52:43.129813449 +0000 UTC m=+0.133074970 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, name=rhosp17/openstack-nova-compute, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:52:43 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:52:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:52:44 localhost systemd[1]: tmp-crun.rZZbDq.mount: Deactivated successfully. 
Dec 2 03:52:44 localhost podman[97757]: 2025-12-02 08:52:44.086537518 +0000 UTC m=+0.094634836 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, 
com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:52:44 localhost podman[97757]: 2025-12-02 08:52:44.494162063 +0000 UTC m=+0.502259331 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute) Dec 2 03:52:44 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:52:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:52:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:52:46 localhost systemd[1]: tmp-crun.ConUxC.mount: Deactivated successfully. 
Dec 2 03:52:46 localhost podman[97779]: 2025-12-02 08:52:46.097650551 +0000 UTC m=+0.092712476 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, container_name=ovn_metadata_agent, distribution-scope=public, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:52:46 localhost podman[97780]: 2025-12-02 08:52:46.144285836 +0000 UTC m=+0.137569658 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:52:46 localhost podman[97780]: 2025-12-02 08:52:46.162808223 +0000 UTC m=+0.156092095 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, 
com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, version=17.1.12) Dec 2 03:52:46 localhost podman[97780]: unhealthy Dec 2 03:52:46 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:52:46 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. 
Dec 2 03:52:46 localhost podman[97779]: 2025-12-02 08:52:46.217823156 +0000 UTC m=+0.212885071 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4) Dec 2 03:52:46 localhost podman[97779]: unhealthy Dec 2 03:52:46 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:52:46 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 03:52:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:52:51 localhost podman[97895]: 2025-12-02 08:52:51.096489292 +0000 UTC m=+0.088795797 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T22:51:28Z, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step3, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:52:51 localhost podman[97895]: 2025-12-02 08:52:51.141063345 +0000 UTC m=+0.133369850 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T22:51:28Z, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1) Dec 2 03:52:51 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:52:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:52:54 localhost podman[97915]: 2025-12-02 08:52:54.088948095 +0000 UTC m=+0.092814919 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 2 03:52:54 localhost podman[97915]: 2025-12-02 08:52:54.104863712 +0000 UTC m=+0.108730536 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1) Dec 2 03:52:54 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:53:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:53:06 localhost systemd[1]: tmp-crun.IN5p09.mount: Deactivated successfully. 
Dec 2 03:53:06 localhost podman[97933]: 2025-12-02 08:53:06.091760419 +0000 UTC m=+0.097051439 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true) Dec 2 03:53:06 localhost podman[97933]: 2025-12-02 08:53:06.269915798 +0000 UTC m=+0.275206858 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 
'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64) Dec 2 03:53:06 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:53:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:53:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:53:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:53:11 localhost systemd[1]: tmp-crun.odIgXI.mount: Deactivated successfully. 
Dec 2 03:53:11 localhost podman[97961]: 2025-12-02 08:53:11.077542613 +0000 UTC m=+0.086021762 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, container_name=logrotate_crond, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, release=1761123044) Dec 2 03:53:11 localhost systemd[1]: tmp-crun.vtOEff.mount: Deactivated successfully. Dec 2 03:53:11 localhost podman[97962]: 2025-12-02 08:53:11.114189733 +0000 UTC m=+0.114980707 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com) Dec 2 03:53:11 localhost podman[97961]: 2025-12-02 08:53:11.118760242 +0000 UTC m=+0.127239431 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, release=1761123044) Dec 2 03:53:11 localhost systemd[1]: 
7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:53:11 localhost podman[97968]: 2025-12-02 08:53:11.199796881 +0000 UTC m=+0.193734686 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:53:11 localhost podman[97962]: 2025-12-02 08:53:11.223399993 +0000 UTC m=+0.224190917 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team) Dec 2 03:53:11 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:53:11 localhost podman[97968]: 2025-12-02 08:53:11.255160254 +0000 UTC m=+0.249098029 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:53:11 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:53:14 localhost podman[98034]: 2025-12-02 08:53:14.081820198 +0000 UTC m=+0.083407342 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, release=1761123044, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_id=tripleo_step5, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, 
architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Dec 2 03:53:14 localhost podman[98034]: 2025-12-02 08:53:14.109795124 +0000 UTC m=+0.111382278 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute) Dec 2 03:53:14 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:53:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:53:15 localhost podman[98062]: 2025-12-02 08:53:15.083172871 +0000 UTC m=+0.083144463 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container) Dec 2 03:53:15 localhost podman[98062]: 2025-12-02 08:53:15.458992474 +0000 UTC m=+0.458964076 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z) Dec 2 03:53:15 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:53:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:53:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:53:17 localhost podman[98084]: 2025-12-02 08:53:17.077867502 +0000 UTC m=+0.083806034 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ovn_metadata_agent) Dec 2 03:53:17 localhost podman[98084]: 2025-12-02 08:53:17.121092463 +0000 UTC m=+0.127030985 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=ovn_metadata_agent, 
tcib_managed=true, maintainer=OpenStack TripleO Team) Dec 2 03:53:17 localhost podman[98084]: unhealthy Dec 2 03:53:17 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:53:17 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 03:53:17 localhost podman[98085]: 2025-12-02 08:53:17.140008362 +0000 UTC m=+0.142916601 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, container_name=ovn_controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc.) Dec 2 03:53:17 localhost podman[98085]: 2025-12-02 08:53:17.15793147 +0000 UTC m=+0.160839749 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, build-date=2025-11-18T23:34:05Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=ovn_controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:53:17 localhost podman[98085]: unhealthy Dec 2 03:53:17 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:53:17 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:53:21 localhost sshd[98126]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:53:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:53:21 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:53:22 localhost recover_tripleo_nova_virtqemud[98135]: 61907 Dec 2 03:53:22 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:53:22 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 2 03:53:22 localhost podman[98128]: 2025-12-02 08:53:22.062104427 +0000 UTC m=+0.089838999 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, version=17.1.12, batch=17.1_20251118.1, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-type=git) Dec 2 03:53:22 localhost podman[98128]: 2025-12-02 08:53:22.102009507 +0000 UTC m=+0.129744099 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1761123044, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:53:22 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:53:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:53:25 localhost podman[98150]: 2025-12-02 08:53:25.079575116 +0000 UTC m=+0.080385170 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 03:53:25 localhost podman[98150]: 2025-12-02 08:53:25.091063627 +0000 UTC m=+0.091873721 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-type=git, container_name=iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:53:25 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:53:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:53:37 localhost podman[98170]: 2025-12-02 08:53:37.086707166 +0000 UTC m=+0.085896768 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, vcs-type=git, url=https://www.redhat.com) Dec 2 03:53:37 localhost podman[98170]: 2025-12-02 08:53:37.314015868 +0000 UTC m=+0.313205450 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Dec 2 03:53:37 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:53:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:53:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:53:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:53:42 localhost podman[98199]: 2025-12-02 08:53:42.087104937 +0000 UTC m=+0.087861148 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, release=1761123044, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, name=rhosp17/openstack-cron, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond) Dec 2 03:53:42 localhost systemd[1]: tmp-crun.qEPWfd.mount: Deactivated successfully. Dec 2 03:53:42 localhost podman[98200]: 2025-12-02 08:53:42.14999305 +0000 UTC m=+0.150292517 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1) Dec 2 03:53:42 localhost podman[98201]: 2025-12-02 08:53:42.162983138 +0000 UTC m=+0.155668952 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, config_id=tripleo_step4, 
io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 2 03:53:42 localhost podman[98199]: 2025-12-02 08:53:42.170290942 +0000 UTC m=+0.171047153 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1) Dec 2 03:53:42 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:53:42 localhost podman[98201]: 2025-12-02 08:53:42.19609358 +0000 UTC m=+0.188779384 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1) Dec 2 03:53:42 localhost podman[98200]: 2025-12-02 08:53:42.212951226 +0000 UTC m=+0.213250683 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, 
architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, managed_by=tripleo_ansible) Dec 2 03:53:42 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:53:42 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:53:45 localhost systemd[1]: tmp-crun.x3VjrL.mount: Deactivated successfully. Dec 2 03:53:45 localhost podman[98270]: 2025-12-02 08:53:45.090538432 +0000 UTC m=+0.091403636 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, version=17.1.12, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:53:45 localhost podman[98270]: 2025-12-02 08:53:45.122966895 +0000 UTC m=+0.123832109 container exec_died 
6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, container_name=nova_compute) Dec 2 03:53:45 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:53:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:53:46 localhost podman[98297]: 2025-12-02 08:53:46.071616567 +0000 UTC m=+0.076063737 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, distribution-scope=public, container_name=nova_migration_target, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:53:46 localhost podman[98297]: 2025-12-02 08:53:46.486482975 +0000 UTC m=+0.490930115 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:53:46 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:53:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:53:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:53:48 localhost podman[98322]: 2025-12-02 08:53:48.074052931 +0000 UTC m=+0.078006577 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, 
architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, vcs-type=git) Dec 2 03:53:48 localhost podman[98321]: 2025-12-02 08:53:48.136379437 +0000 UTC m=+0.139391754 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 03:53:48 localhost podman[98322]: 2025-12-02 08:53:48.163697802 +0000 UTC m=+0.167651498 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:53:48 localhost podman[98322]: unhealthy Dec 2 03:53:48 localhost podman[98321]: 2025-12-02 08:53:48.175679569 +0000 UTC m=+0.178691866 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
version=17.1.12, container_name=ovn_metadata_agent, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:53:48 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:53:48 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:53:48 localhost podman[98321]: unhealthy Dec 2 03:53:48 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:53:48 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 03:53:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:53:53 localhost systemd[1]: tmp-crun.7FJmnJ.mount: Deactivated successfully. 
Dec 2 03:53:53 localhost podman[98437]: 2025-12-02 08:53:53.096940779 +0000 UTC m=+0.100167405 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Dec 2 03:53:53 localhost podman[98437]: 2025-12-02 08:53:53.133433125 +0000 UTC m=+0.136659671 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:53:53 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:53:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:53:56 localhost podman[98456]: 2025-12-02 08:53:56.074123572 +0000 UTC m=+0.081251186 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, 
architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:53:56 localhost podman[98456]: 2025-12-02 08:53:56.087929095 +0000 UTC m=+0.095056679 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:53:56 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:54:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:54:08 localhost systemd[1]: tmp-crun.aHF7KS.mount: Deactivated successfully. 
Dec 2 03:54:08 localhost podman[98475]: 2025-12-02 08:54:08.069395173 +0000 UTC m=+0.075074518 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, 
tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, architecture=x86_64, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:54:08 localhost podman[98475]: 2025-12-02 08:54:08.243103845 +0000 UTC m=+0.248783180 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:54:08 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:54:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:54:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:54:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:54:13 localhost systemd[1]: tmp-crun.XScZ6U.mount: Deactivated successfully. 
Dec 2 03:54:13 localhost podman[98504]: 2025-12-02 08:54:13.096544672 +0000 UTC m=+0.098377870 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044) Dec 2 03:54:13 localhost podman[98506]: 2025-12-02 08:54:13.15468953 +0000 UTC m=+0.149014658 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, architecture=x86_64, distribution-scope=public, tcib_managed=true, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 2 03:54:13 localhost podman[98504]: 2025-12-02 08:54:13.179733116 +0000 UTC m=+0.181566254 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, distribution-scope=public, vcs-type=git, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, 
url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:54:13 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:54:13 localhost podman[98506]: 2025-12-02 08:54:13.207803135 +0000 UTC m=+0.202128263 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:54:13 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:54:13 localhost podman[98505]: 2025-12-02 08:54:13.131203972 +0000 UTC m=+0.129827392 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, vcs-type=git) Dec 2 03:54:13 localhost podman[98505]: 2025-12-02 08:54:13.266056917 +0000 UTC m=+0.264680307 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Dec 2 03:54:13 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:54:14 localhost systemd[1]: tmp-crun.DBOL5B.mount: Deactivated successfully. Dec 2 03:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:54:16 localhost podman[98578]: 2025-12-02 08:54:16.077304895 +0000 UTC m=+0.083033470 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, container_name=nova_compute, version=17.1.12, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red 
Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:54:16 localhost podman[98578]: 2025-12-02 08:54:16.136941599 +0000 UTC m=+0.142670194 container 
exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5) Dec 2 03:54:16 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:54:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:54:17 localhost podman[98607]: 2025-12-02 08:54:17.081362083 +0000 UTC m=+0.084226147 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, release=1761123044) Dec 2 03:54:17 localhost podman[98607]: 2025-12-02 08:54:17.480751037 +0000 UTC m=+0.483615071 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, 
tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Dec 2 03:54:17 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:54:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:54:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:54:19 localhost podman[98631]: 2025-12-02 08:54:19.083515377 +0000 UTC m=+0.087318432 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=) Dec 2 03:54:19 localhost podman[98631]: 2025-12-02 08:54:19.124653895 +0000 UTC m=+0.128456940 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 2 03:54:19 localhost podman[98631]: unhealthy Dec 2 03:54:19 localhost systemd[1]: tmp-crun.FVX65C.mount: Deactivated successfully. Dec 2 03:54:19 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:54:19 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 03:54:19 localhost podman[98632]: 2025-12-02 08:54:19.154502928 +0000 UTC m=+0.156013032 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.openshift.expose-services=, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12) Dec 2 03:54:19 localhost podman[98632]: 2025-12-02 08:54:19.192933843 +0000 UTC m=+0.194443927 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, config_id=tripleo_step4) Dec 2 03:54:19 localhost podman[98632]: unhealthy Dec 2 03:54:19 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:54:19 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:54:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:54:24 localhost podman[98671]: 2025-12-02 08:54:24.050967721 +0000 UTC m=+0.057528151 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd) Dec 2 03:54:24 localhost podman[98671]: 2025-12-02 08:54:24.088913491 +0000 UTC m=+0.095473941 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Dec 2 03:54:24 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:54:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:54:27 localhost systemd[1]: tmp-crun.VMswzS.mount: Deactivated successfully. Dec 2 03:54:27 localhost podman[98692]: 2025-12-02 08:54:27.459487036 +0000 UTC m=+0.063213844 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, url=https://www.redhat.com, release=1761123044, architecture=x86_64, name=rhosp17/openstack-iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 03:54:27 localhost podman[98692]: 2025-12-02 08:54:27.471929087 +0000 UTC m=+0.075655945 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, release=1761123044, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, version=17.1.12) Dec 2 03:54:27 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:54:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:54:35 localhost recover_tripleo_nova_virtqemud[98712]: 61907 Dec 2 03:54:35 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:54:35 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:54:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:54:39 localhost podman[98713]: 2025-12-02 08:54:39.084592676 +0000 UTC m=+0.090043914 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, tcib_managed=true, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12) Dec 2 03:54:39 localhost podman[98713]: 2025-12-02 08:54:39.288892275 +0000 UTC m=+0.294343503 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=metrics_qdr, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-qdrouterd, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:54:39 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:54:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:54:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:54:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:54:44 localhost podman[98743]: 2025-12-02 08:54:44.076584052 +0000 UTC m=+0.080495313 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible) Dec 2 03:54:44 localhost podman[98745]: 2025-12-02 08:54:44.10170866 +0000 UTC m=+0.095905564 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4) Dec 2 03:54:44 localhost podman[98745]: 2025-12-02 08:54:44.134872715 +0000 UTC m=+0.129069659 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, build-date=2025-11-19T00:12:45Z, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc.) Dec 2 03:54:44 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:54:44 localhost podman[98744]: 2025-12-02 08:54:44.201160132 +0000 UTC m=+0.198628256 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-19T00:11:48Z, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:54:44 localhost podman[98743]: 2025-12-02 08:54:44.212152268 +0000 UTC m=+0.216063529 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:54:44 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:54:44 localhost podman[98744]: 2025-12-02 08:54:44.262944432 +0000 UTC m=+0.260412556 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 2 03:54:44 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:54:45 localhost systemd[1]: tmp-crun.IfYqNZ.mount: Deactivated successfully. Dec 2 03:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. 
Dec 2 03:54:47 localhost podman[98817]: 2025-12-02 08:54:47.079136392 +0000 UTC m=+0.085701073 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public) Dec 2 03:54:47 localhost podman[98817]: 2025-12-02 08:54:47.112840542 +0000 UTC m=+0.119405253 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, config_id=tripleo_step5, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:54:47 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:54:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:54:48 localhost systemd[1]: tmp-crun.HTXVwM.mount: Deactivated successfully. Dec 2 03:54:48 localhost podman[98843]: 2025-12-02 08:54:48.08483749 +0000 UTC m=+0.087904860 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:54:48 localhost podman[98843]: 2025-12-02 08:54:48.46248505 +0000 UTC m=+0.465552470 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-19T00:36:58Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, 
url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute) Dec 2 03:54:48 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:54:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:54:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:54:50 localhost podman[98868]: 2025-12-02 08:54:50.064171845 +0000 UTC m=+0.069227918 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, 
config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 03:54:50 localhost podman[98868]: 2025-12-02 08:54:50.07708969 +0000 UTC m=+0.082145743 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12) Dec 2 03:54:50 localhost podman[98868]: unhealthy Dec 2 03:54:50 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:54:50 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:54:50 localhost systemd[1]: tmp-crun.KfgGO7.mount: Deactivated successfully. 
Dec 2 03:54:50 localhost podman[98867]: 2025-12-02 08:54:50.135652432 +0000 UTC m=+0.137818936 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, release=1761123044, vcs-type=git, container_name=ovn_metadata_agent, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:54:50 localhost podman[98867]: 2025-12-02 08:54:50.170436235 +0000 UTC m=+0.172602769 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
vendor=Red Hat, Inc., vcs-type=git, container_name=ovn_metadata_agent) Dec 2 03:54:50 localhost podman[98867]: unhealthy Dec 2 03:54:50 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:54:50 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 03:54:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:54:55 localhost podman[98982]: 2025-12-02 08:54:55.09801819 +0000 UTC m=+0.097282426 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-collectd) Dec 2 03:54:55 localhost podman[98982]: 2025-12-02 08:54:55.110313257 +0000 UTC m=+0.109577543 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step3, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd) Dec 2 03:54:55 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:54:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:54:58 localhost podman[99003]: 2025-12-02 08:54:58.081640731 +0000 UTC m=+0.086851757 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, container_name=iscsid, config_id=tripleo_step3, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64) Dec 2 03:54:58 localhost podman[99003]: 2025-12-02 08:54:58.094819495 +0000 UTC m=+0.100030531 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=iscsid, release=1761123044, tcib_managed=true) Dec 2 03:54:58 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:55:04 localhost systemd[1]: session-28.scope: Deactivated successfully. Dec 2 03:55:04 localhost systemd[1]: session-28.scope: Consumed 7min 641ms CPU time. 
Dec 2 03:55:04 localhost systemd-logind[760]: Session 28 logged out. Waiting for processes to exit. Dec 2 03:55:04 localhost systemd-logind[760]: Removed session 28. Dec 2 03:55:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:55:09 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:55:10 localhost recover_tripleo_nova_virtqemud[99025]: 61907 Dec 2 03:55:10 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:55:10 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:55:10 localhost podman[99023]: 2025-12-02 08:55:10.097790292 +0000 UTC m=+0.097502643 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-qdrouterd, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:55:10 localhost podman[99023]: 2025-12-02 08:55:10.29783614 +0000 UTC m=+0.297548541 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4)
Dec 2 03:55:10 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 2 03:55:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 2 03:55:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 2 03:55:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 2 03:55:15 localhost systemd[1]: Stopping User Manager for UID 1003...
Dec 2 03:55:15 localhost systemd[36014]: Activating special unit Exit the Session...
Dec 2 03:55:15 localhost systemd[36014]: Removed slice User Background Tasks Slice.
Dec 2 03:55:15 localhost systemd[36014]: Stopped target Main User Target.
Dec 2 03:55:15 localhost systemd[36014]: Stopped target Basic System.
Dec 2 03:55:15 localhost systemd[36014]: Stopped target Paths.
Dec 2 03:55:15 localhost systemd[36014]: Stopped target Sockets.
Dec 2 03:55:15 localhost systemd[36014]: Stopped target Timers.
Dec 2 03:55:15 localhost systemd[36014]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 2 03:55:15 localhost systemd[36014]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 2 03:55:15 localhost systemd[36014]: Closed D-Bus User Message Bus Socket.
Dec 2 03:55:15 localhost systemd[36014]: Stopped Create User's Volatile Files and Directories.
Dec 2 03:55:15 localhost systemd[36014]: Removed slice User Application Slice.
Dec 2 03:55:15 localhost systemd[36014]: Reached target Shutdown.
Dec 2 03:55:15 localhost systemd[36014]: Finished Exit the Session.
Dec 2 03:55:15 localhost systemd[36014]: Reached target Exit the Session.
Dec 2 03:55:15 localhost systemd[1]: user@1003.service: Deactivated successfully.
Dec 2 03:55:15 localhost systemd[1]: Stopped User Manager for UID 1003.
Dec 2 03:55:15 localhost systemd[1]: user@1003.service: Consumed 5.092s CPU time, read 0B from disk, written 7.0K to disk.
Dec 2 03:55:15 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003...
Dec 2 03:55:15 localhost systemd[1]: run-user-1003.mount: Deactivated successfully.
Dec 2 03:55:15 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Dec 2 03:55:15 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003.
Dec 2 03:55:15 localhost systemd[1]: Removed slice User Slice of UID 1003.
Dec 2 03:55:15 localhost systemd[1]: user-1003.slice: Consumed 7min 5.768s CPU time.
Dec 2 03:55:15 localhost podman[99057]: 2025-12-02 08:55:15.119842435 +0000 UTC m=+0.103940590 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, version=17.1.12, config_data={'environment':
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044)
Dec 2 03:55:15 localhost podman[99057]: 2025-12-02 08:55:15.159595881 +0000 UTC m=+0.143694056 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4,
konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 2 03:55:15 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 2 03:55:15 localhost podman[99055]: 2025-12-02 08:55:15.1664234 +0000 UTC m=+0.150549836 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, distribution-scope=public, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 2 03:55:15 localhost podman[99056]: 2025-12-02 08:55:15.238918207 +0000 UTC m=+0.222601699 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z',
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 2 03:55:15 localhost podman[99055]: 2025-12-02 08:55:15.246373025 +0000 UTC m=+0.230499461 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes':
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, container_name=logrotate_crond, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z)
Dec 2 03:55:15 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 2 03:55:15 localhost podman[99056]: 2025-12-02 08:55:15.295830597 +0000 UTC m=+0.279514079 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 2 03:55:15 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 2 03:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 2 03:55:18 localhost podman[99128]: 2025-12-02 08:55:18.083752391 +0000 UTC m=+0.085117464 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, architecture=x86_64, container_name=nova_compute, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_id=tripleo_step5)
Dec 2 03:55:18 localhost podman[99128]: 2025-12-02 08:55:18.119941988 +0000 UTC m=+0.121307061 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always',
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, tcib_managed=true, managed_by=tripleo_ansible, 
com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_id=tripleo_step5, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute)
Dec 2 03:55:18 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 2 03:55:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 2 03:55:19 localhost podman[99154]: 2025-12-02 08:55:19.105559302 +0000 UTC m=+0.112672587 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged':
True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public)
Dec 2 03:55:19 localhost podman[99154]: 2025-12-02 08:55:19.477829097 +0000 UTC m=+0.484942402 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream,
io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.41.4) Dec 2 03:55:19 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:55:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:55:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:55:21 localhost podman[99177]: 2025-12-02 08:55:21.075592003 +0000 UTC m=+0.081001757 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, vcs-type=git) Dec 2 03:55:21 localhost podman[99177]: 2025-12-02 08:55:21.11568102 +0000 UTC m=+0.121090754 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, 
io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:55:21 localhost podman[99177]: unhealthy Dec 2 03:55:21 localhost systemd[1]: tmp-crun.BJtz8l.mount: Deactivated successfully. Dec 2 03:55:21 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:55:21 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 03:55:21 localhost podman[99178]: 2025-12-02 08:55:21.135593409 +0000 UTC m=+0.138545619 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 03:55:21 localhost podman[99178]: 2025-12-02 08:55:21.178059377 +0000 UTC m=+0.181011587 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, container_name=ovn_controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, tcib_managed=true, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com) Dec 2 03:55:21 localhost podman[99178]: unhealthy Dec 2 03:55:21 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:55:21 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:55:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:55:26 localhost podman[99215]: 2025-12-02 08:55:26.079276735 +0000 UTC m=+0.082467033 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, architecture=x86_64, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step3, tcib_managed=true, build-date=2025-11-18T22:51:28Z, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12) Dec 2 03:55:26 localhost podman[99215]: 2025-12-02 08:55:26.119199286 +0000 UTC m=+0.122389584 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, 
tcib_managed=true, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12) Dec 2 03:55:26 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:55:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:55:29 localhost podman[99236]: 2025-12-02 08:55:29.080924956 +0000 UTC m=+0.083084702 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, batch=17.1_20251118.1, container_name=iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:55:29 localhost podman[99236]: 2025-12-02 08:55:29.094037837 +0000 UTC m=+0.096197593 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, name=rhosp17/openstack-iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, container_name=iscsid, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, vcs-type=git) Dec 2 03:55:29 localhost systemd[1]: 
f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:55:33 localhost sshd[99255]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:55:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:55:41 localhost podman[99257]: 2025-12-02 08:55:41.07231471 +0000 UTC m=+0.073349555 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step1, vendor=Red Hat, Inc., container_name=metrics_qdr, maintainer=OpenStack TripleO Team, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Dec 2 03:55:41 localhost podman[99257]: 2025-12-02 08:55:41.259823454 +0000 UTC m=+0.260858239 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:55:41 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:55:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:55:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. 
Dec 2 03:55:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:55:46 localhost podman[99286]: 2025-12-02 08:55:46.072158253 +0000 UTC m=+0.075098097 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:55:46 localhost podman[99286]: 2025-12-02 08:55:46.082114118 +0000 UTC m=+0.085054042 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:49:32Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Dec 2 03:55:46 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 2 03:55:46 localhost podman[99287]: 2025-12-02 08:55:46.122758971 +0000 UTC m=+0.126467099 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., 
description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team)
Dec 2 03:55:46 localhost podman[99288]: 2025-12-02 08:55:46.139880624 +0000 UTC m=+0.138592139 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform
17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team)
Dec 2 03:55:46 localhost podman[99287]: 2025-12-02 08:55:46.149257292 +0000 UTC m=+0.152965440 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, container_name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container,
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 2 03:55:46 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 2 03:55:46 localhost podman[99288]: 2025-12-02 08:55:46.199865489 +0000 UTC m=+0.198576994 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test':
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 2 03:55:46 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 2 03:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 2 03:55:49 localhost podman[99358]: 2025-12-02 08:55:49.082384107 +0000 UTC m=+0.088166698 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=nova_compute, release=1761123044, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com)
Dec 2 03:55:49 localhost podman[99358]: 2025-12-02 08:55:49.117804741 +0000 UTC m=+0.123587322 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.,
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=nova_compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 2 03:55:49 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 2 03:55:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 2 03:55:50 localhost podman[99384]: 2025-12-02 08:55:50.085133695 +0000 UTC m=+0.084324470 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., container_name=nova_migration_target, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64)
Dec 2 03:55:50 localhost podman[99384]: 2025-12-02 08:55:50.447040233 +0000 UTC m=+0.446231018 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute,
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=nova_migration_target, vcs-type=git, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container)
Dec 2 03:55:50 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 2 03:55:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 2 03:55:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 2 03:55:52 localhost systemd[1]: tmp-crun.NPBv2q.mount: Deactivated successfully.
Dec 2 03:55:52 localhost podman[99421]: 2025-12-02 08:55:52.098183762 +0000 UTC m=+0.103066293 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, container_name=ovn_metadata_agent, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn,
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 2 03:55:52 localhost podman[99421]: 2025-12-02 08:55:52.111652254 +0000 UTC m=+0.116534745 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git)
Dec 2 03:55:52 localhost podman[99421]: unhealthy
Dec 2 03:55:52 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 03:55:52 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 2 03:55:52 localhost systemd[1]: tmp-crun.nYNY5u.mount: Deactivated successfully.
Dec 2 03:55:52 localhost podman[99422]: 2025-12-02 08:55:52.209700792 +0000 UTC m=+0.207344842 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ovn_controller, 
vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container) Dec 2 03:55:52 localhost podman[99422]: 2025-12-02 08:55:52.229858559 +0000 UTC m=+0.227502609 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, architecture=x86_64, config_id=tripleo_step4) Dec 2 03:55:52 localhost podman[99422]: unhealthy Dec 2 03:55:52 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:55:52 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:55:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:55:57 localhost podman[99525]: 2025-12-02 08:55:57.087155903 +0000 UTC m=+0.092206112 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, container_name=collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com) Dec 2 03:55:57 localhost podman[99525]: 2025-12-02 08:55:57.125389742 +0000 UTC m=+0.130439921 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-collectd) Dec 2 03:55:57 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:55:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:56:00 localhost systemd[1]: tmp-crun.tmfG0F.mount: Deactivated successfully. Dec 2 03:56:00 localhost podman[99545]: 2025-12-02 08:56:00.08569216 +0000 UTC m=+0.086747344 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, vcs-type=git, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, version=17.1.12) Dec 2 03:56:00 localhost podman[99545]: 2025-12-02 08:56:00.09749222 +0000 UTC m=+0.098547354 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1) Dec 2 03:56:00 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:56:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:56:12 localhost podman[99566]: 2025-12-02 08:56:12.094348901 +0000 UTC m=+0.069246338 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, tcib_managed=true, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:56:12 localhost podman[99566]: 2025-12-02 08:56:12.286607222 +0000 UTC m=+0.261504659 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_id=tripleo_step1, 
url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:56:12 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:56:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. 
Dec 2 03:56:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:56:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:56:17 localhost systemd[1]: tmp-crun.YosS0P.mount: Deactivated successfully. Dec 2 03:56:17 localhost podman[99595]: 2025-12-02 08:56:17.087946825 +0000 UTC m=+0.083158485 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4) Dec 2 03:56:17 localhost systemd[1]: tmp-crun.qQBOh8.mount: Deactivated successfully. 
Dec 2 03:56:17 localhost podman[99594]: 2025-12-02 08:56:17.131833417 +0000 UTC m=+0.129458541 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., container_name=logrotate_crond, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1) Dec 2 03:56:17 localhost podman[99595]: 2025-12-02 08:56:17.142569625 +0000 UTC m=+0.137781325 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64) Dec 2 03:56:17 localhost podman[99594]: 2025-12-02 08:56:17.14958473 +0000 UTC m=+0.147209874 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:56:17 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:56:17 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:56:17 localhost podman[99596]: 2025-12-02 08:56:17.197183266 +0000 UTC m=+0.186919938 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=) Dec 2 03:56:17 localhost podman[99596]: 2025-12-02 08:56:17.221913492 +0000 UTC m=+0.211650234 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:56:17 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:56:19 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:56:20 localhost recover_tripleo_nova_virtqemud[99667]: 61907 Dec 2 03:56:20 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:56:20 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 2 03:56:20 localhost systemd[1]: tmp-crun.LSn8al.mount: Deactivated successfully. Dec 2 03:56:20 localhost podman[99665]: 2025-12-02 08:56:20.093517897 +0000 UTC m=+0.094808031 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:56:20 localhost podman[99665]: 2025-12-02 08:56:20.120534622 +0000 UTC m=+0.121824786 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., container_name=nova_compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 
17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container) Dec 2 03:56:20 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:56:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:56:21 localhost podman[99693]: 2025-12-02 08:56:21.08042856 +0000 UTC m=+0.083275618 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:56:21 localhost podman[99693]: 2025-12-02 08:56:21.461863016 +0000 UTC m=+0.464710054 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, container_name=nova_migration_target, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:56:21 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:56:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:56:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 03:56:23 localhost podman[99717]: 2025-12-02 08:56:23.093936351 +0000 UTC m=+0.091826230 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, version=17.1.12, container_name=ovn_controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4) Dec 2 03:56:23 localhost podman[99716]: 2025-12-02 08:56:23.143835447 +0000 UTC m=+0.143224092 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1) Dec 2 03:56:23 localhost podman[99716]: 2025-12-02 08:56:23.161352953 +0000 UTC m=+0.160741628 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z) Dec 2 03:56:23 localhost podman[99717]: 2025-12-02 08:56:23.163577361 +0000 UTC m=+0.161467220 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4) Dec 2 03:56:23 localhost podman[99717]: unhealthy Dec 2 03:56:23 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:56:23 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:56:23 localhost podman[99716]: unhealthy Dec 2 03:56:23 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:56:23 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 03:56:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:56:28 localhost podman[99756]: 2025-12-02 08:56:28.083231171 +0000 UTC m=+0.084887537 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, release=1761123044) Dec 2 03:56:28 localhost podman[99756]: 2025-12-02 08:56:28.095856587 +0000 UTC m=+0.097512923 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., version=17.1.12, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:56:28 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:56:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:56:31 localhost podman[99776]: 2025-12-02 08:56:31.073511465 +0000 UTC m=+0.077315485 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:56:31 localhost podman[99776]: 2025-12-02 08:56:31.086911025 +0000 UTC m=+0.090715085 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, distribution-scope=public, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:56:31 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:56:35 localhost sshd[99795]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:56:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 03:56:43 localhost podman[99797]: 2025-12-02 08:56:43.085262509 +0000 UTC m=+0.084931998 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:56:43 localhost podman[99797]: 2025-12-02 08:56:43.27951164 +0000 UTC m=+0.279181099 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, config_id=tripleo_step1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 
'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:56:43 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:56:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:56:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:56:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:56:48 localhost systemd[1]: tmp-crun.yYJw1x.mount: Deactivated successfully. 
Dec 2 03:56:48 localhost podman[99825]: 2025-12-02 08:56:48.102278309 +0000 UTC m=+0.104878459 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, 
managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z) Dec 2 03:56:48 localhost podman[99825]: 2025-12-02 08:56:48.143105667 +0000 UTC m=+0.145705787 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, release=1761123044, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., container_name=logrotate_crond, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true) Dec 2 03:56:48 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:56:48 localhost podman[99827]: 2025-12-02 08:56:48.149818123 +0000 UTC m=+0.144526532 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:56:48 localhost podman[99826]: 2025-12-02 08:56:48.255543317 +0000 UTC m=+0.255289470 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, architecture=x86_64, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:56:48 localhost podman[99827]: 2025-12-02 08:56:48.284922955 +0000 UTC m=+0.279631334 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, managed_by=tripleo_ansible, 
architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi) Dec 2 03:56:48 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:56:48 localhost podman[99826]: 2025-12-02 08:56:48.315372876 +0000 UTC m=+0.315119039 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:56:48 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:56:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. 
Dec 2 03:56:51 localhost podman[99895]: 2025-12-02 08:56:51.101858558 +0000 UTC m=+0.102435294 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step5, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4) Dec 2 03:56:51 localhost podman[99895]: 2025-12-02 08:56:51.151011071 +0000 UTC m=+0.151587737 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, container_name=nova_compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:56:51 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:56:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:56:52 localhost podman[99922]: 2025-12-02 08:56:52.078555109 +0000 UTC m=+0.083409442 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container) Dec 2 03:56:52 localhost podman[99922]: 2025-12-02 08:56:52.449994259 +0000 UTC m=+0.454848522 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4) Dec 2 03:56:52 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:56:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:56:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:56:53 localhost podman[99959]: 2025-12-02 08:56:53.760234941 +0000 UTC m=+0.094912924 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
build-date=2025-11-18T23:34:05Z, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller) Dec 2 03:56:53 localhost podman[99958]: 2025-12-02 08:56:53.810635443 +0000 UTC m=+0.147496792 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, architecture=x86_64) Dec 2 03:56:53 localhost podman[99958]: 2025-12-02 08:56:53.8269206 +0000 UTC m=+0.163781939 container 
exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, distribution-scope=public, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true) Dec 2 03:56:53 localhost podman[99959]: 2025-12-02 08:56:53.83309807 +0000 UTC m=+0.167776063 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:56:53 localhost podman[99959]: unhealthy Dec 2 03:56:53 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:56:53 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:56:53 localhost podman[99958]: unhealthy Dec 2 03:56:53 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:56:53 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 03:56:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:56:59 localhost podman[100058]: 2025-12-02 08:56:59.066994213 +0000 UTC m=+0.075378226 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:56:59 localhost podman[100058]: 2025-12-02 08:56:59.077177445 +0000 UTC m=+0.085561488 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Dec 2 03:56:59 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:57:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:57:02 localhost systemd[1]: tmp-crun.uiOAwQ.mount: Deactivated successfully. Dec 2 03:57:02 localhost podman[100077]: 2025-12-02 08:57:02.08124545 +0000 UTC m=+0.087032433 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., config_id=tripleo_step3, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:57:02 localhost podman[100077]: 2025-12-02 08:57:02.112119574 +0000 UTC m=+0.117906517 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:57:02 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 03:57:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:57:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:57:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:57:14 localhost podman[100094]: 2025-12-02 08:57:14.089855609 +0000 UTC m=+0.095113030 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, version=17.1.12, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd) Dec 2 03:57:14 localhost podman[100094]: 2025-12-02 08:57:14.288890536 +0000 UTC m=+0.294147947 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, 
io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Dec 2 03:57:14 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:57:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:57:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.2 total, 600.0 interval#012Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:57:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:57:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:57:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:57:19 localhost podman[100125]: 2025-12-02 08:57:19.072046433 +0000 UTC m=+0.068238697 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, release=1761123044, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc.) Dec 2 03:57:19 localhost podman[100123]: 2025-12-02 08:57:19.140191887 +0000 UTC m=+0.137036432 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:57:19 localhost podman[100123]: 2025-12-02 08:57:19.1530151 +0000 UTC m=+0.149859655 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, architecture=x86_64, name=rhosp17/openstack-cron, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4) Dec 2 03:57:19 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:57:19 localhost podman[100124]: 2025-12-02 08:57:19.24425446 +0000 UTC m=+0.240444914 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:57:19 localhost podman[100125]: 2025-12-02 08:57:19.257805124 +0000 UTC m=+0.253997408 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, container_name=ceilometer_agent_ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:57:19 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:57:19 localhost podman[100124]: 2025-12-02 08:57:19.302955935 +0000 UTC m=+0.299146419 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, distribution-scope=public, container_name=ceilometer_agent_compute, url=https://www.redhat.com, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:57:19 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:57:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:57:22 localhost systemd[1]: tmp-crun.uDols8.mount: Deactivated successfully. 
Dec 2 03:57:22 localhost podman[100197]: 2025-12-02 08:57:22.068869267 +0000 UTC m=+0.075335135 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step5, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:57:22 localhost podman[100197]: 2025-12-02 08:57:22.126089767 +0000 UTC m=+0.132555635 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, 
com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:57:22 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:57:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:57:23 localhost podman[100223]: 2025-12-02 08:57:23.088539752 +0000 UTC m=+0.092694995 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible) Dec 2 03:57:23 localhost podman[100223]: 2025-12-02 08:57:23.450844223 +0000 UTC m=+0.454999556 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 
nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Dec 2 03:57:23 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:57:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:57:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:57:24 localhost podman[100246]: 2025-12-02 08:57:24.093079345 +0000 UTC m=+0.091172079 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step4, url=https://www.redhat.com, 
summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 2 03:57:24 localhost podman[100246]: 2025-12-02 08:57:24.128906671 +0000 UTC m=+0.126999195 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, architecture=x86_64, 
managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, container_name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, release=1761123044, io.buildah.version=1.41.4, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:57:24 localhost podman[100246]: unhealthy Dec 2 03:57:24 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:57:24 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 03:57:24 localhost podman[100247]: 2025-12-02 08:57:24.152530884 +0000 UTC m=+0.148639807 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044) Dec 2 03:57:24 localhost podman[100247]: 2025-12-02 08:57:24.189439442 +0000 UTC m=+0.185548355 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller) Dec 2 03:57:24 localhost podman[100247]: unhealthy Dec 2 03:57:24 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:57:24 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:57:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:57:30 localhost podman[100285]: 2025-12-02 08:57:30.081989788 +0000 UTC m=+0.083892267 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, url=https://www.redhat.com, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible) Dec 2 03:57:30 localhost podman[100285]: 2025-12-02 08:57:30.119957409 +0000 UTC m=+0.121859888 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 
17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, container_name=collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public) Dec 2 03:57:30 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:57:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:57:33 localhost podman[100305]: 2025-12-02 08:57:33.077665108 +0000 UTC m=+0.079648057 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=) Dec 2 03:57:33 localhost podman[100305]: 2025-12-02 08:57:33.112137882 +0000 UTC m=+0.114120851 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, tcib_managed=true, architecture=x86_64, distribution-scope=public, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 
17.1 iscsid, config_id=tripleo_step3) Dec 2 03:57:33 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:57:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:57:45 localhost podman[100323]: 2025-12-02 08:57:45.079573472 +0000 UTC m=+0.082506035 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc.) Dec 2 03:57:45 localhost podman[100323]: 2025-12-02 08:57:45.286954424 +0000 UTC m=+0.289886997 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.12, vcs-type=git, container_name=metrics_qdr, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:57:45 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:57:46 localhost sshd[100353]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:57:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. 
Dec 2 03:57:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:57:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:57:50 localhost podman[100355]: 2025-12-02 08:57:50.085171652 +0000 UTC m=+0.087885169 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-cron, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:57:50 localhost podman[100355]: 2025-12-02 08:57:50.120896144 +0000 UTC m=+0.123609641 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, release=1761123044, version=17.1.12, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:57:50 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:57:50 localhost podman[100357]: 2025-12-02 08:57:50.122313848 +0000 UTC m=+0.124281411 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Dec 2 03:57:50 localhost systemd[1]: tmp-crun.g3EKdO.mount: Deactivated successfully. Dec 2 03:57:50 localhost podman[100356]: 2025-12-02 08:57:50.191711331 +0000 UTC m=+0.193118137 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 2 03:57:50 localhost podman[100357]: 2025-12-02 08:57:50.205903855 +0000 UTC m=+0.207871378 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:12:45Z, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:57:50 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:57:50 localhost podman[100356]: 2025-12-02 08:57:50.241912616 +0000 UTC m=+0.243319402 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, container_name=ceilometer_agent_compute, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 2 03:57:50 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:57:53 localhost systemd[1]: tmp-crun.MxR5KR.mount: Deactivated successfully. 
Dec 2 03:57:53 localhost podman[100429]: 2025-12-02 08:57:53.073224958 +0000 UTC m=+0.076930544 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64) Dec 2 03:57:53 localhost podman[100429]: 2025-12-02 08:57:53.125512627 +0000 UTC m=+0.129218223 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.expose-services=, config_id=tripleo_step5, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, release=1761123044) Dec 2 03:57:53 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:57:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:57:54 localhost systemd[1]: tmp-crun.4wSFsu.mount: Deactivated successfully. Dec 2 03:57:54 localhost podman[100455]: 2025-12-02 08:57:54.07611501 +0000 UTC m=+0.082745131 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, tcib_managed=true) Dec 2 03:57:54 localhost podman[100455]: 2025-12-02 08:57:54.471034278 +0000 UTC m=+0.477664369 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, 
io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, version=17.1.12) Dec 2 03:57:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:57:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:57:54 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:57:54 localhost podman[100478]: 2025-12-02 08:57:54.580013901 +0000 UTC m=+0.084832665 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 2 03:57:54 localhost podman[100479]: 2025-12-02 08:57:54.642553254 +0000 UTC m=+0.141943432 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:57:54 localhost podman[100478]: 2025-12-02 08:57:54.668425336 +0000 UTC m=+0.173244110 
container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:14:25Z) Dec 2 03:57:54 localhost podman[100478]: unhealthy Dec 2 03:57:54 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:57:54 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 03:57:54 localhost podman[100479]: 2025-12-02 08:57:54.684569779 +0000 UTC m=+0.183959947 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:57:54 localhost podman[100479]: unhealthy Dec 2 03:57:54 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:57:54 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:58:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:58:01 localhost podman[100595]: 2025-12-02 08:58:01.097324166 +0000 UTC m=+0.088404255 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step3, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:58:01 localhost podman[100595]: 2025-12-02 08:58:01.109299322 +0000 UTC m=+0.100379431 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, 
batch=17.1_20251118.1, version=17.1.12, release=1761123044, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:58:01 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:58:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:58:04 localhost podman[100613]: 2025-12-02 08:58:04.08271377 +0000 UTC m=+0.085913338 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, vcs-type=git, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public) Dec 2 03:58:04 localhost podman[100613]: 2025-12-02 08:58:04.123213349 +0000 UTC m=+0.126412907 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, tcib_managed=true, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z) Dec 2 03:58:04 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 03:58:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:58:15 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:58:16 localhost recover_tripleo_nova_virtqemud[100640]: 61907 Dec 2 03:58:16 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:58:16 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:58:16 localhost podman[100633]: 2025-12-02 08:58:16.067779629 +0000 UTC m=+0.078662778 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4) Dec 2 03:58:16 localhost podman[100633]: 2025-12-02 08:58:16.290097268 +0000 UTC m=+0.300980327 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, container_name=metrics_qdr, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.) Dec 2 03:58:16 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. 
Dec 2 03:58:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:58:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:58:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:58:21 localhost systemd[1]: tmp-crun.Em19E7.mount: Deactivated successfully. Dec 2 03:58:21 localhost podman[100665]: 2025-12-02 08:58:21.09832335 +0000 UTC m=+0.100064310 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1761123044, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:58:21 localhost podman[100666]: 2025-12-02 08:58:21.143362068 +0000 UTC m=+0.140325063 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, 
konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 2 03:58:21 localhost podman[100665]: 2025-12-02 08:58:21.157809899 +0000 UTC m=+0.159550799 
container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:58:21 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:58:21 localhost podman[100666]: 2025-12-02 08:58:21.182396631 +0000 UTC m=+0.179359656 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public) Dec 2 03:58:21 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:58:21 localhost podman[100664]: 2025-12-02 08:58:21.24804157 +0000 UTC m=+0.250752411 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:58:21 localhost podman[100664]: 2025-12-02 08:58:21.261988986 +0000 UTC m=+0.264699797 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, tcib_managed=true, version=17.1.12, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 
'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:58:21 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. 
Dec 2 03:58:24 localhost podman[100737]: 2025-12-02 08:58:24.07798787 +0000 UTC m=+0.080730970 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=nova_compute, release=1761123044) Dec 2 03:58:24 localhost podman[100737]: 2025-12-02 08:58:24.10907603 +0000 UTC m=+0.111819120 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, vendor=Red Hat, Inc., container_name=nova_compute, 
vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, 
version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:58:24 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:58:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:58:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:58:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 03:58:25 localhost podman[100763]: 2025-12-02 08:58:25.083255264 +0000 UTC m=+0.088052864 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:58:25 localhost systemd[1]: tmp-crun.X5en6K.mount: Deactivated successfully. 
Dec 2 03:58:25 localhost podman[100764]: 2025-12-02 08:58:25.184277764 +0000 UTC m=+0.185969699 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://www.redhat.com, container_name=ovn_controller, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:58:25 localhost podman[100763]: 2025-12-02 08:58:25.199834339 +0000 UTC m=+0.204631969 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 03:58:25 localhost podman[100763]: unhealthy Dec 2 03:58:25 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:58:25 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 03:58:25 localhost podman[100764]: 2025-12-02 08:58:25.223751631 +0000 UTC m=+0.225443536 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:58:25 localhost podman[100764]: unhealthy Dec 2 03:58:25 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:58:25 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:58:25 localhost podman[100765]: 2025-12-02 08:58:25.281655792 +0000 UTC m=+0.279545350 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, release=1761123044, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc.) 
Dec 2 03:58:25 localhost podman[100765]: 2025-12-02 08:58:25.634218754 +0000 UTC m=+0.632108272 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:58:25 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:58:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:58:32 localhost podman[100823]: 2025-12-02 08:58:32.08451028 +0000 UTC m=+0.091161189 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vcs-type=git, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:58:32 localhost podman[100823]: 2025-12-02 08:58:32.101998475 +0000 UTC m=+0.108649374 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, release=1761123044, architecture=x86_64, name=rhosp17/openstack-collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container) Dec 2 03:58:32 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:58:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:58:35 localhost podman[100843]: 2025-12-02 08:58:35.081523811 +0000 UTC m=+0.084598599 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, architecture=x86_64, 
url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044) Dec 2 03:58:35 localhost podman[100843]: 2025-12-02 08:58:35.120936786 +0000 UTC m=+0.124011614 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 
iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3) Dec 2 03:58:35 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:58:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:58:47 localhost podman[100863]: 2025-12-02 08:58:47.081096225 +0000 UTC m=+0.084294609 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team) Dec 2 03:58:47 localhost podman[100863]: 2025-12-02 08:58:47.300391811 +0000 UTC m=+0.303590185 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 
'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd) Dec 2 03:58:47 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:58:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. 
Dec 2 03:58:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:58:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:58:52 localhost podman[100893]: 2025-12-02 08:58:52.06660512 +0000 UTC m=+0.067980410 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team) Dec 2 03:58:52 localhost podman[100893]: 2025-12-02 08:58:52.091614755 +0000 UTC m=+0.092990085 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Dec 2 03:58:52 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:58:52 localhost podman[100894]: 2025-12-02 08:58:52.183328379 +0000 UTC m=+0.180068178 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, 
managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:58:52 localhost podman[100894]: 2025-12-02 08:58:52.235089203 +0000 UTC m=+0.231828952 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com) Dec 2 03:58:52 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:58:52 localhost podman[100892]: 2025-12-02 08:58:52.239985233 +0000 UTC m=+0.241366213 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron) Dec 2 03:58:52 localhost podman[100892]: 2025-12-02 08:58:52.320946039 +0000 UTC m=+0.322327049 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=logrotate_crond, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.) Dec 2 03:58:52 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. 
Dec 2 03:58:55 localhost podman[100965]: 2025-12-02 08:58:55.060129243 +0000 UTC m=+0.065277397 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, distribution-scope=public, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Dec 2 03:58:55 localhost podman[100965]: 2025-12-02 08:58:55.078254708 +0000 UTC m=+0.083402892 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:36:58Z, tcib_managed=true, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:58:55 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:58:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:58:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:58:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:58:56 localhost systemd[1]: tmp-crun.HE9pcL.mount: Deactivated successfully. 
Dec 2 03:58:56 localhost podman[100995]: 2025-12-02 08:58:56.079307613 +0000 UTC m=+0.072072205 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:58:56 localhost podman[100993]: 2025-12-02 08:58:56.130655124 +0000 UTC m=+0.129143861 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Dec 2 03:58:56 localhost podman[100993]: 2025-12-02 08:58:56.169182762 +0000 UTC m=+0.167671529 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, 
tcib_managed=true, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Dec 2 03:58:56 localhost podman[100993]: unhealthy Dec 2 03:58:56 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:58:56 localhost podman[100994]: 2025-12-02 08:58:56.18873575 +0000 UTC m=+0.181622096 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:58:56 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 03:58:56 localhost podman[100994]: 2025-12-02 08:58:56.23186736 +0000 UTC m=+0.224753746 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., 
description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Dec 2 03:58:56 localhost podman[100994]: unhealthy Dec 2 03:58:56 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:58:56 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:58:56 localhost podman[100995]: 2025-12-02 08:58:56.47484242 +0000 UTC m=+0.467607052 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc.) Dec 2 03:58:56 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:58:57 localhost systemd[1]: tmp-crun.zxQsi6.mount: Deactivated successfully. Dec 2 03:59:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 03:59:03 localhost podman[101182]: 2025-12-02 08:59:03.106181252 +0000 UTC m=+0.101299800 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step3, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 2 03:59:03 localhost podman[101182]: 2025-12-02 08:59:03.120808209 +0000 UTC m=+0.115926717 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack 
Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:51:28Z, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:59:03 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:59:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 03:59:06 localhost podman[101202]: 2025-12-02 08:59:06.077535607 +0000 UTC m=+0.082528935 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, description=Red 
Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:59:06 localhost podman[101202]: 2025-12-02 08:59:06.086611794 +0000 UTC m=+0.091605122 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, container_name=iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:59:06 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:59:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:59:18 localhost systemd[1]: tmp-crun.LFzGRP.mount: Deactivated successfully. 
Dec 2 03:59:18 localhost podman[101221]: 2025-12-02 08:59:18.074255454 +0000 UTC m=+0.077765730 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, build-date=2025-11-18T22:49:46Z, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1) Dec 2 03:59:18 localhost podman[101221]: 2025-12-02 08:59:18.271344631 +0000 UTC m=+0.274854897 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4) Dec 2 03:59:18 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 03:59:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:59:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:59:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 03:59:23 localhost podman[101251]: 2025-12-02 08:59:23.088054954 +0000 UTC m=+0.085007900 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:59:23 localhost podman[101251]: 2025-12-02 08:59:23.120901179 +0000 UTC m=+0.117854105 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com) Dec 2 03:59:23 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 03:59:23 localhost podman[101252]: 2025-12-02 08:59:23.142525531 +0000 UTC m=+0.137688302 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, architecture=x86_64) Dec 2 03:59:23 localhost podman[101252]: 2025-12-02 08:59:23.176827999 +0000 UTC m=+0.171990770 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:59:23 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 03:59:23 localhost podman[101250]: 2025-12-02 08:59:23.194554961 +0000 UTC m=+0.193247301 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git) Dec 2 03:59:23 localhost podman[101250]: 2025-12-02 08:59:23.204719812 +0000 UTC m=+0.203412182 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public) Dec 2 03:59:23 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:59:23 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 03:59:23 localhost recover_tripleo_nova_virtqemud[101324]: 61907 Dec 2 03:59:23 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:59:23 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. 
Dec 2 03:59:26 localhost podman[101325]: 2025-12-02 08:59:26.07835828 +0000 UTC m=+0.078243484 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git) Dec 2 03:59:26 localhost podman[101325]: 2025-12-02 08:59:26.108857213 +0000 UTC m=+0.108742447 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, build-date=2025-11-19T00:36:58Z, version=17.1.12, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step5, io.openshift.expose-services=, container_name=nova_compute, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:59:26 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:59:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:59:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:59:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:59:27 localhost systemd[1]: tmp-crun.TG8QmJ.mount: Deactivated successfully. 
Dec 2 03:59:27 localhost podman[101351]: 2025-12-02 08:59:27.115334265 +0000 UTC m=+0.078606015 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12) Dec 2 03:59:27 localhost systemd[1]: tmp-crun.9yEXs1.mount: Deactivated successfully. Dec 2 03:59:27 localhost podman[101352]: 2025-12-02 08:59:27.134585024 +0000 UTC m=+0.088456237 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Dec 2 03:59:27 localhost podman[101350]: 2025-12-02 08:59:27.176052323 +0000 UTC m=+0.136980921 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:59:27 localhost podman[101351]: 2025-12-02 08:59:27.208199175 +0000 UTC m=+0.171470885 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=ovn_controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Dec 2 03:59:27 localhost podman[101351]: unhealthy Dec 2 03:59:27 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:59:27 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 03:59:27 localhost podman[101350]: 2025-12-02 08:59:27.282044274 +0000 UTC m=+0.242972872 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:59:27 localhost podman[101350]: unhealthy Dec 2 03:59:27 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:59:27 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 03:59:27 localhost podman[101352]: 2025-12-02 08:59:27.516252277 +0000 UTC m=+0.470123520 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:59:27 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 03:59:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 03:59:34 localhost podman[101407]: 2025-12-02 08:59:34.079806526 +0000 UTC m=+0.086989892 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:59:34 localhost podman[101407]: 2025-12-02 08:59:34.089864114 +0000 UTC m=+0.097047460 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4) Dec 2 03:59:34 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 03:59:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 03:59:37 localhost systemd[1]: tmp-crun.J4XKVE.mount: Deactivated successfully. Dec 2 03:59:37 localhost podman[101427]: 2025-12-02 08:59:37.058936709 +0000 UTC m=+0.067806865 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, container_name=iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1) Dec 2 03:59:37 localhost podman[101427]: 2025-12-02 08:59:37.093520347 +0000 UTC m=+0.102390513 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, 
config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:59:37 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 03:59:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 03:59:49 localhost podman[101447]: 2025-12-02 08:59:49.098140764 +0000 UTC m=+0.101132454 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 
qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, config_id=tripleo_step1, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:59:49 localhost podman[101447]: 2025-12-02 08:59:49.291121606 +0000 UTC m=+0.294113376 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step1, container_name=metrics_qdr, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:59:49 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. 
Dec 2 03:59:53 localhost sshd[101476]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:59:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 03:59:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 03:59:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 03:59:53 localhost systemd[1]: tmp-crun.PzCFTc.mount: Deactivated successfully. Dec 2 03:59:53 localhost podman[101480]: 2025-12-02 08:59:53.705277557 +0000 UTC m=+0.138023833 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:59:53 localhost podman[101478]: 2025-12-02 08:59:53.669387149 +0000 UTC m=+0.106181418 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack 
Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, vcs-type=git, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:59:53 localhost podman[101480]: 2025-12-02 08:59:53.734799659 +0000 UTC m=+0.167545885 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, 
name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi) Dec 2 03:59:53 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 03:59:53 localhost podman[101478]: 2025-12-02 08:59:53.754522123 +0000 UTC m=+0.191316422 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=logrotate_crond, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:59:53 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 03:59:53 localhost podman[101479]: 2025-12-02 08:59:53.821103529 +0000 UTC m=+0.257896268 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:59:53 localhost podman[101479]: 2025-12-02 08:59:53.853943744 +0000 UTC m=+0.290736473 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:59:53 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 03:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 03:59:57 localhost systemd[1]: tmp-crun.xOwr5T.mount: Deactivated successfully. 
Dec 2 03:59:57 localhost podman[101555]: 2025-12-02 08:59:57.070203638 +0000 UTC m=+0.075649985 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step5, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:59:57 localhost podman[101555]: 2025-12-02 08:59:57.118800284 +0000 UTC m=+0.124246601 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Dec 2 03:59:57 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 03:59:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 03:59:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 03:59:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 03:59:58 localhost systemd[1]: tmp-crun.m2cSxp.mount: Deactivated successfully. 
Dec 2 03:59:58 localhost podman[101582]: 2025-12-02 08:59:58.07634734 +0000 UTC m=+0.079150052 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, container_name=nova_migration_target, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Dec 2 03:59:58 localhost podman[101581]: 2025-12-02 08:59:58.133575159 +0000 UTC m=+0.136664290 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=ovn_controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container) Dec 2 03:59:58 localhost podman[101581]: 2025-12-02 08:59:58.17282224 +0000 UTC m=+0.175911341 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller) Dec 2 03:59:58 localhost podman[101581]: unhealthy Dec 2 03:59:58 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:59:58 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. 
Dec 2 03:59:58 localhost podman[101580]: 2025-12-02 08:59:58.190352047 +0000 UTC m=+0.193673755 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, container_name=ovn_metadata_agent, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:59:58 localhost podman[101580]: 2025-12-02 08:59:58.198801475 +0000 UTC m=+0.202123113 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.12, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:59:58 localhost podman[101580]: unhealthy Dec 2 03:59:58 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:59:58 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 03:59:58 localhost podman[101582]: 2025-12-02 08:59:58.479768878 +0000 UTC m=+0.482571600 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, container_name=nova_migration_target, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12) Dec 2 03:59:58 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 04:00:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 04:00:05 localhost podman[101723]: 2025-12-02 09:00:05.086049522 +0000 UTC m=+0.092173321 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64) Dec 2 04:00:05 localhost podman[101723]: 2025-12-02 09:00:05.121489276 +0000 UTC m=+0.127613045 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step3) Dec 2 04:00:05 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 04:00:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 04:00:08 localhost podman[101743]: 2025-12-02 09:00:08.090373966 +0000 UTC m=+0.088606752 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1) Dec 2 04:00:08 localhost podman[101743]: 2025-12-02 09:00:08.12387003 +0000 UTC m=+0.122102776 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid) Dec 2 04:00:08 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 04:00:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 04:00:20 localhost podman[101765]: 2025-12-02 09:00:20.080525469 +0000 UTC m=+0.086544138 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 
17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr) Dec 2 04:00:20 localhost podman[101765]: 2025-12-02 09:00:20.30029142 +0000 UTC m=+0.306310049 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, vcs-type=git, tcib_managed=true, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:00:20 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 04:00:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 04:00:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 04:00:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 04:00:24 localhost systemd[1]: tmp-crun.jSjruy.mount: Deactivated successfully. 
Dec 2 04:00:24 localhost podman[101796]: 2025-12-02 09:00:24.083841136 +0000 UTC m=+0.069901179 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 2 04:00:24 localhost podman[101795]: 2025-12-02 09:00:24.13400711 +0000 UTC m=+0.119920369 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, container_name=ceilometer_agent_compute, distribution-scope=public) Dec 2 04:00:24 localhost podman[101796]: 2025-12-02 09:00:24.138969761 +0000 UTC m=+0.125029814 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true) Dec 2 04:00:24 localhost systemd[1]: 
a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 04:00:24 localhost systemd[1]: tmp-crun.gwai7X.mount: Deactivated successfully. Dec 2 04:00:24 localhost podman[101794]: 2025-12-02 09:00:24.188420554 +0000 UTC m=+0.181120291 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=logrotate_crond, config_id=tripleo_step4, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-cron, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12) Dec 2 04:00:24 localhost podman[101794]: 2025-12-02 09:00:24.198723249 +0000 UTC m=+0.191422986 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, config_id=tripleo_step4) Dec 2 04:00:24 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 04:00:24 localhost podman[101795]: 2025-12-02 09:00:24.210176069 +0000 UTC m=+0.196089388 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, container_name=ceilometer_agent_compute, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 2 04:00:24 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 04:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:00:28 localhost systemd[1]: tmp-crun.2Gxrq9.mount: Deactivated successfully. 
Dec 2 04:00:28 localhost podman[101865]: 2025-12-02 09:00:28.080000382 +0000 UTC m=+0.084618159 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, name=rhosp17/openstack-nova-compute, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1761123044) Dec 2 04:00:28 localhost podman[101865]: 2025-12-02 09:00:28.11294011 +0000 UTC m=+0.117557867 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=nova_compute, 
version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step5, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 04:00:28 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 04:00:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:00:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 04:00:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 04:00:29 localhost podman[101891]: 2025-12-02 09:00:29.073373394 +0000 UTC m=+0.072707715 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, batch=17.1_20251118.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, tcib_managed=true) Dec 2 04:00:29 localhost podman[101891]: 2025-12-02 09:00:29.087962819 +0000 UTC m=+0.087297130 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 04:00:29 localhost podman[101891]: unhealthy Dec 2 04:00:29 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:00:29 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:00:29 localhost systemd[1]: tmp-crun.cBNQsg.mount: Deactivated successfully. Dec 2 04:00:29 localhost podman[101892]: 2025-12-02 09:00:29.17689737 +0000 UTC m=+0.173794877 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1) Dec 2 04:00:29 localhost podman[101890]: 2025-12-02 09:00:29.151546694 +0000 UTC m=+0.153605989 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z) Dec 2 04:00:29 localhost podman[101890]: 2025-12-02 09:00:29.231913012 +0000 UTC m=+0.233972317 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, tcib_managed=true, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, config_id=tripleo_step4, architecture=x86_64, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044) Dec 2 04:00:29 localhost podman[101890]: unhealthy Dec 2 04:00:29 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:00:29 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 04:00:29 localhost podman[101892]: 2025-12-02 09:00:29.584852396 +0000 UTC m=+0.581749863 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 04:00:29 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 04:00:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 04:00:35 localhost recover_tripleo_nova_virtqemud[101952]: 61907 Dec 2 04:00:35 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 04:00:35 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 04:00:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 04:00:36 localhost podman[101953]: 2025-12-02 09:00:36.082204 +0000 UTC m=+0.084059082 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-collectd-container, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:00:36 localhost podman[101953]: 2025-12-02 09:00:36.096988532 +0000 UTC m=+0.098843614 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-collectd-container, container_name=collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 04:00:36 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 04:00:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 04:00:39 localhost podman[101973]: 2025-12-02 09:00:39.087019827 +0000 UTC m=+0.084247017 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step3, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, version=17.1.12) Dec 2 04:00:39 localhost podman[101973]: 2025-12-02 09:00:39.103153691 +0000 UTC m=+0.100380841 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 04:00:39 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 04:00:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 04:00:51 localhost podman[101992]: 2025-12-02 09:00:51.07409633 +0000 UTC m=+0.080330447 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, distribution-scope=public, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z) Dec 2 04:00:51 localhost podman[101992]: 2025-12-02 09:00:51.292779468 +0000 UTC m=+0.299013565 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 04:00:51 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 04:00:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 04:00:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 04:00:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 04:00:55 localhost podman[102023]: 2025-12-02 09:00:55.07100103 +0000 UTC m=+0.065942147 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 2 04:00:55 localhost podman[102023]: 2025-12-02 09:00:55.124778725 +0000 UTC m=+0.119719812 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, io.openshift.expose-services=, 
io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public) Dec 2 04:00:55 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 04:00:55 localhost podman[102022]: 2025-12-02 09:00:55.141762464 +0000 UTC m=+0.141082345 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, url=https://www.redhat.com, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12) Dec 2 04:00:55 localhost podman[102021]: 2025-12-02 09:00:55.190540816 +0000 UTC m=+0.192935721 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, 
io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 04:00:55 localhost podman[102021]: 2025-12-02 09:00:55.197904911 +0000 UTC m=+0.200299746 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container) Dec 2 04:00:55 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated 
successfully. Dec 2 04:00:55 localhost podman[102022]: 2025-12-02 09:00:55.221091601 +0000 UTC m=+0.220411412 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git) Dec 2 04:00:55 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 04:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:00:59 localhost podman[102093]: 2025-12-02 09:00:59.066579799 +0000 UTC m=+0.067366391 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, build-date=2025-11-19T00:36:58Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1) Dec 2 04:00:59 localhost podman[102093]: 2025-12-02 09:00:59.120963993 +0000 UTC m=+0.121750505 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, tcib_managed=true, distribution-scope=public, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4) Dec 2 04:00:59 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 04:00:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 04:00:59 localhost podman[102120]: 2025-12-02 09:00:59.228707798 +0000 UTC m=+0.074298033 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, version=17.1.12, container_name=ovn_controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible) Dec 2 04:00:59 localhost podman[102120]: 2025-12-02 09:00:59.244966785 +0000 UTC m=+0.090556960 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, architecture=x86_64, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4) Dec 2 04:00:59 localhost podman[102120]: unhealthy Dec 2 04:00:59 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:00:59 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:00:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:00:59 localhost podman[102141]: 2025-12-02 09:00:59.35106214 +0000 UTC m=+0.068281859 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, vcs-type=git) Dec 2 04:00:59 localhost podman[102141]: 2025-12-02 09:00:59.36904796 +0000 UTC 
m=+0.086267719 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 04:00:59 localhost podman[102141]: unhealthy Dec 2 04:00:59 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:00:59 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 04:00:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 04:01:00 localhost podman[102160]: 2025-12-02 09:01:00.080148058 +0000 UTC m=+0.087199488 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 04:01:00 localhost podman[102160]: 2025-12-02 09:01:00.468037002 +0000 UTC m=+0.475088362 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 04:01:00 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. 
Dec 2 04:01:04 localhost podman[102310]: 2025-12-02 09:01:04.230392408 +0000 UTC m=+0.098482793 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_BRANCH=main, release=1763362218, io.buildah.version=1.41.4, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , version=7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:01:04 localhost podman[102310]: 2025-12-02 09:01:04.338748821 +0000 UTC m=+0.206839236 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , ceph=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.41.4, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:01:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 04:01:07 localhost systemd[1]: tmp-crun.mVBnf1.mount: Deactivated successfully. 
Dec 2 04:01:07 localhost podman[102454]: 2025-12-02 09:01:07.07740938 +0000 UTC m=+0.081327509 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, release=1761123044, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-collectd, version=17.1.12, config_id=tripleo_step3, io.buildah.version=1.41.4, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 04:01:07 localhost podman[102454]: 2025-12-02 09:01:07.087562361 +0000 UTC m=+0.091480540 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, container_name=collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 04:01:07 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 04:01:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 04:01:10 localhost podman[102472]: 2025-12-02 09:01:10.073744869 +0000 UTC m=+0.078492661 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, 
com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.) Dec 2 04:01:10 localhost podman[102472]: 2025-12-02 09:01:10.112086651 +0000 UTC m=+0.116834453 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, container_name=iscsid, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:01:10 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 04:01:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 04:01:22 localhost podman[102492]: 2025-12-02 09:01:22.104130243 +0000 UTC m=+0.111220363 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 04:01:22 localhost podman[102492]: 2025-12-02 09:01:22.298434226 +0000 UTC m=+0.305524426 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 
17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:01:22 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 04:01:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 04:01:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 04:01:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 04:01:26 localhost systemd[1]: tmp-crun.SwLYbQ.mount: Deactivated successfully. 
Dec 2 04:01:26 localhost podman[102522]: 2025-12-02 09:01:26.063863888 +0000 UTC m=+0.066202545 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, container_name=logrotate_crond, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git) Dec 2 04:01:26 localhost podman[102523]: 2025-12-02 09:01:26.133320592 +0000 UTC m=+0.133572205 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true) Dec 2 04:01:26 localhost podman[102522]: 2025-12-02 09:01:26.149978912 +0000 UTC m=+0.152317519 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 
'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc.) 
Dec 2 04:01:26 localhost podman[102524]: 2025-12-02 09:01:26.097950361 +0000 UTC m=+0.092285643 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:01:26 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 04:01:26 localhost podman[102523]: 2025-12-02 09:01:26.166530498 +0000 UTC m=+0.166782091 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:01:26 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 04:01:26 localhost podman[102524]: 2025-12-02 09:01:26.233033402 +0000 UTC m=+0.227368694 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 04:01:26 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 04:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 04:01:30 localhost podman[102591]: 2025-12-02 09:01:30.080322829 +0000 UTC m=+0.085916368 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=nova_compute) Dec 2 04:01:30 localhost podman[102591]: 2025-12-02 09:01:30.116560557 +0000 UTC m=+0.122154056 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, 
managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5) Dec 2 04:01:30 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 04:01:30 localhost podman[102592]: 2025-12-02 09:01:30.130871455 +0000 UTC m=+0.130560064 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, architecture=x86_64, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, vcs-type=git, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 04:01:30 localhost podman[102592]: 2025-12-02 09:01:30.141040356 +0000 UTC m=+0.140728945 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1761123044, container_name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 2 04:01:30 localhost podman[102592]: unhealthy Dec 2 04:01:30 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:01:30 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 04:01:30 localhost podman[102593]: 2025-12-02 09:01:30.190129747 +0000 UTC m=+0.188788084 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, distribution-scope=public, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 04:01:30 localhost podman[102593]: 2025-12-02 09:01:30.203353032 +0000 UTC m=+0.202011309 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 
17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 04:01:30 localhost podman[102593]: unhealthy Dec 2 04:01:30 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:01:30 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:01:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 04:01:31 localhost systemd[1]: tmp-crun.moMLLC.mount: Deactivated successfully. Dec 2 04:01:31 localhost systemd[1]: tmp-crun.wfXBPV.mount: Deactivated successfully. 
Dec 2 04:01:31 localhost podman[102657]: 2025-12-02 09:01:31.092275858 +0000 UTC m=+0.091418146 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Dec 2 04:01:31 localhost podman[102657]: 2025-12-02 09:01:31.431476983 +0000 UTC m=+0.430619301 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, io.openshift.expose-services=, container_name=nova_migration_target, tcib_managed=true, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Dec 2 04:01:31 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 04:01:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 04:01:38 localhost systemd[1]: tmp-crun.NdebTW.mount: Deactivated successfully. 
Dec 2 04:01:38 localhost podman[102680]: 2025-12-02 09:01:38.089018947 +0000 UTC m=+0.090897640 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, release=1761123044, com.redhat.component=openstack-collectd-container, version=17.1.12, container_name=collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 2 04:01:38 localhost podman[102680]: 2025-12-02 09:01:38.12314483 +0000 UTC m=+0.125023593 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 2 04:01:38 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 04:01:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 04:01:41 localhost podman[102701]: 2025-12-02 09:01:41.059738852 +0000 UTC m=+0.064825793 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red 
Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, tcib_managed=true, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:01:41 localhost podman[102701]: 2025-12-02 09:01:41.094729863 +0000 UTC m=+0.099816794 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid) Dec 2 04:01:41 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 04:01:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 04:01:53 localhost podman[102721]: 2025-12-02 09:01:53.081467243 +0000 UTC m=+0.080921576 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4) Dec 2 04:01:53 localhost podman[102721]: 2025-12-02 09:01:53.274621121 +0000 UTC m=+0.274075444 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64) Dec 2 04:01:53 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 04:01:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 04:01:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 04:01:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 04:01:57 localhost podman[102751]: 2025-12-02 09:01:57.079070367 +0000 UTC m=+0.080751641 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=) Dec 2 04:01:57 localhost podman[102751]: 2025-12-02 09:01:57.135190783 +0000 UTC m=+0.136872047 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 04:01:57 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 04:01:57 localhost podman[102752]: 2025-12-02 09:01:57.190707671 +0000 UTC m=+0.187764373 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true) Dec 2 04:01:57 localhost podman[102752]: 2025-12-02 09:01:57.220870773 +0000 UTC m=+0.217927515 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 04:01:57 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. 
Dec 2 04:01:57 localhost podman[102750]: 2025-12-02 09:01:57.142205667 +0000 UTC m=+0.146366147 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 2 04:01:57 localhost podman[102750]: 2025-12-02 09:01:57.276129813 +0000 UTC m=+0.280290293 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=logrotate_crond, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 2 04:01:57 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 04:02:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:02:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:02:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 04:02:01 localhost podman[102824]: 2025-12-02 09:02:01.082187597 +0000 UTC m=+0.076313615 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Dec 2 04:02:01 localhost podman[102824]: 2025-12-02 09:02:01.097836105 +0000 UTC m=+0.091962033 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 04:02:01 localhost podman[102824]: unhealthy Dec 2 04:02:01 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:02:01 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:02:01 localhost systemd[1]: tmp-crun.hxGpi2.mount: Deactivated successfully. Dec 2 04:02:01 localhost podman[102823]: 2025-12-02 09:02:01.135930631 +0000 UTC m=+0.132081381 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 04:02:01 localhost podman[102823]: 2025-12-02 
09:02:01.147787033 +0000 UTC m=+0.143937813 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible) Dec 2 04:02:01 localhost podman[102823]: unhealthy Dec 2 04:02:01 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:02:01 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 04:02:01 localhost podman[102822]: 2025-12-02 09:02:01.185551779 +0000 UTC m=+0.186728313 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 04:02:01 localhost podman[102822]: 2025-12-02 09:02:01.238732435 +0000 UTC m=+0.239908969 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, distribution-scope=public, config_id=tripleo_step5, vendor=Red Hat, Inc., container_name=nova_compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:02:01 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 04:02:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 04:02:02 localhost podman[102890]: 2025-12-02 09:02:02.065489931 +0000 UTC m=+0.074066277 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true) Dec 2 04:02:02 localhost podman[102890]: 2025-12-02 09:02:02.416252238 +0000 UTC m=+0.424828574 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64) Dec 2 04:02:02 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 04:02:05 localhost sshd[102913]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:02:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 04:02:09 localhost podman[102991]: 2025-12-02 09:02:09.065827347 +0000 UTC m=+0.074292023 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:02:09 localhost podman[102991]: 2025-12-02 09:02:09.102964283 +0000 UTC m=+0.111428959 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, config_id=tripleo_step3, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044) Dec 2 04:02:09 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 04:02:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 04:02:12 localhost podman[103012]: 2025-12-02 09:02:12.08225528 +0000 UTC m=+0.081702459 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid) Dec 2 04:02:12 localhost podman[103012]: 2025-12-02 09:02:12.11950839 +0000 UTC m=+0.118955539 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, container_name=iscsid, release=1761123044, distribution-scope=public, config_id=tripleo_step3, io.openshift.expose-services=, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64) Dec 2 04:02:12 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 04:02:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 04:02:24 localhost podman[103031]: 2025-12-02 09:02:24.075718716 +0000 UTC m=+0.083284228 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step1) Dec 2 04:02:24 localhost podman[103031]: 2025-12-02 09:02:24.246070145 +0000 UTC m=+0.253635627 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, version=17.1.12) Dec 2 04:02:24 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 04:02:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 04:02:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 04:02:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 04:02:28 localhost systemd[1]: tmp-crun.7rddvy.mount: Deactivated successfully. Dec 2 04:02:28 localhost podman[103063]: 2025-12-02 09:02:28.06263413 +0000 UTC m=+0.060692167 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044) Dec 2 04:02:28 localhost podman[103062]: 2025-12-02 09:02:28.075620927 +0000 UTC m=+0.073018914 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:02:28 localhost podman[103063]: 2025-12-02 09:02:28.084438757 +0000 UTC m=+0.082496744 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 04:02:28 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 04:02:28 localhost podman[103062]: 2025-12-02 09:02:28.100711284 +0000 UTC m=+0.098109281 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:11:48Z, release=1761123044, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible) Dec 2 04:02:28 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 04:02:28 localhost podman[103061]: 2025-12-02 09:02:28.180637619 +0000 UTC m=+0.182082740 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 
17.1 cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond) Dec 2 04:02:28 localhost podman[103061]: 2025-12-02 09:02:28.213647249 +0000 UTC m=+0.215092360 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 04:02:28 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 04:02:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:02:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:02:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 04:02:32 localhost podman[103136]: 2025-12-02 09:02:32.094051796 +0000 UTC m=+0.088974932 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:02:32 localhost podman[103136]: 2025-12-02 09:02:32.135906616 +0000 UTC m=+0.130829742 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20251118.1, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4) Dec 2 04:02:32 localhost podman[103135]: 2025-12-02 09:02:32.139135975 +0000 UTC m=+0.137947740 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Dec 2 04:02:32 localhost podman[103136]: unhealthy Dec 2 04:02:32 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:02:32 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 04:02:32 localhost podman[103137]: 2025-12-02 09:02:32.187156313 +0000 UTC m=+0.178618354 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, 
config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:02:32 localhost podman[103137]: 2025-12-02 09:02:32.22855885 +0000 UTC m=+0.220020811 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, tcib_managed=true, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container) Dec 2 04:02:32 localhost podman[103137]: unhealthy Dec 2 04:02:32 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:02:32 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:02:32 localhost podman[103135]: 2025-12-02 09:02:32.271238925 +0000 UTC m=+0.270050750 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
version=17.1.12) Dec 2 04:02:32 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 04:02:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 04:02:32 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 04:02:33 localhost recover_tripleo_nova_virtqemud[103203]: 61907 Dec 2 04:02:33 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 04:02:33 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 04:02:33 localhost systemd[1]: tmp-crun.lS1dOn.mount: Deactivated successfully. Dec 2 04:02:33 localhost podman[103195]: 2025-12-02 09:02:33.079268297 +0000 UTC m=+0.084872227 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-nova-compute) Dec 2 04:02:33 localhost podman[103195]: 2025-12-02 09:02:33.443779446 +0000 UTC m=+0.449383316 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step4, vcs-type=git, container_name=nova_migration_target, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true) Dec 2 04:02:33 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 04:02:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 04:02:40 localhost podman[103220]: 2025-12-02 09:02:40.082922046 +0000 UTC m=+0.085514137 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 2 04:02:40 localhost podman[103220]: 2025-12-02 09:02:40.095848951 +0000 UTC m=+0.098441032 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
build-date=2025-11-18T22:51:28Z, tcib_managed=true, vcs-type=git) Dec 2 04:02:40 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 04:02:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 04:02:43 localhost podman[103241]: 2025-12-02 09:02:43.071804117 +0000 UTC m=+0.077319736 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, com.redhat.component=openstack-iscsid-container, container_name=iscsid, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team) Dec 2 04:02:43 localhost podman[103241]: 2025-12-02 09:02:43.078910024 +0000 UTC m=+0.084425703 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, container_name=iscsid, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 04:02:43 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 04:02:49 localhost sshd[103260]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:02:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 04:02:55 localhost podman[103262]: 2025-12-02 09:02:55.080534327 +0000 UTC m=+0.084724802 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:02:55 localhost podman[103262]: 2025-12-02 09:02:55.297996098 +0000 UTC m=+0.302186573 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=metrics_qdr, architecture=x86_64, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044) Dec 2 04:02:55 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 04:02:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 04:02:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 04:02:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 04:02:59 localhost podman[103294]: 2025-12-02 09:02:59.073207148 +0000 UTC m=+0.071715364 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12) Dec 2 04:02:59 localhost podman[103294]: 2025-12-02 09:02:59.12331306 +0000 UTC m=+0.121821496 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, release=1761123044) Dec 2 04:02:59 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 04:02:59 localhost systemd[1]: tmp-crun.E8qxJQ.mount: Deactivated successfully. 
Dec 2 04:02:59 localhost podman[103295]: 2025-12-02 09:02:59.134418749 +0000 UTC m=+0.128199901 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, build-date=2025-11-19T00:12:45Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 04:02:59 localhost podman[103293]: 2025-12-02 09:02:59.190528125 +0000 UTC m=+0.188773844 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public) Dec 2 04:02:59 localhost podman[103293]: 2025-12-02 09:02:59.197876271 +0000 UTC m=+0.196122010 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, container_name=logrotate_crond, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container) Dec 2 04:02:59 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 04:02:59 localhost podman[103295]: 2025-12-02 09:02:59.219865133 +0000 UTC m=+0.213646355 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 04:02:59 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 04:03:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:03:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:03:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 04:03:03 localhost systemd[1]: tmp-crun.02fzzy.mount: Deactivated successfully. 
Dec 2 04:03:03 localhost podman[103367]: 2025-12-02 09:03:03.083667192 +0000 UTC m=+0.089227230 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, version=17.1.12, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_id=tripleo_step5, url=https://www.redhat.com, container_name=nova_compute, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1) Dec 2 04:03:03 localhost podman[103368]: 2025-12-02 09:03:03.089936714 +0000 UTC m=+0.088368903 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 2 04:03:03 localhost podman[103368]: 2025-12-02 09:03:03.111307128 +0000 UTC m=+0.109739267 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Dec 2 04:03:03 localhost podman[103368]: unhealthy Dec 2 04:03:03 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:03:03 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 04:03:03 localhost podman[103367]: 2025-12-02 09:03:03.139190811 +0000 UTC m=+0.144750769 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Dec 2 04:03:03 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. 
Dec 2 04:03:03 localhost podman[103369]: 2025-12-02 09:03:03.199510145 +0000 UTC m=+0.194680545 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, description=Red Hat 
OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4) Dec 2 04:03:03 localhost podman[103369]: 2025-12-02 09:03:03.249011789 +0000 UTC m=+0.244182209 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, 
batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12) Dec 2 04:03:03 localhost podman[103369]: unhealthy Dec 2 04:03:03 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:03:03 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:03:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 04:03:04 localhost podman[103433]: 2025-12-02 09:03:04.056164234 +0000 UTC m=+0.068481325 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vcs-type=git, architecture=x86_64, 
io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute) Dec 2 04:03:04 localhost systemd[1]: tmp-crun.PHYaxV.mount: Deactivated successfully. 
Dec 2 04:03:04 localhost podman[103433]: 2025-12-02 09:03:04.422378815 +0000 UTC m=+0.434695886 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, 
maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, container_name=nova_migration_target) Dec 2 04:03:04 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 04:03:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 04:03:11 localhost podman[103533]: 2025-12-02 09:03:11.095904456 +0000 UTC m=+0.092918643 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, release=1761123044, container_name=collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:03:11 localhost podman[103533]: 2025-12-02 09:03:11.132603448 +0000 UTC m=+0.129617615 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, 
version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, container_name=collectd, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4) Dec 2 04:03:11 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 04:03:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 04:03:14 localhost podman[103553]: 2025-12-02 09:03:14.093167524 +0000 UTC m=+0.090093517 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, release=1761123044) Dec 2 04:03:14 localhost podman[103553]: 2025-12-02 09:03:14.129926118 +0000 UTC m=+0.126852131 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, 
build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc.) 
Dec 2 04:03:14 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 04:03:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 04:03:26 localhost podman[103573]: 2025-12-02 09:03:26.090332063 +0000 UTC m=+0.091932663 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Dec 2 04:03:26 localhost podman[103573]: 2025-12-02 09:03:26.259825196 +0000 UTC m=+0.261425736 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, container_name=metrics_qdr, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 04:03:26 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 04:03:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 04:03:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. 
Dec 2 04:03:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 04:03:30 localhost podman[103602]: 2025-12-02 09:03:30.090171264 +0000 UTC m=+0.092244952 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Dec 2 04:03:30 localhost podman[103602]: 2025-12-02 09:03:30.101511901 +0000 UTC m=+0.103585609 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, container_name=logrotate_crond, architecture=x86_64, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 04:03:30 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 04:03:30 localhost podman[103603]: 2025-12-02 09:03:30.198410954 +0000 UTC m=+0.194602682 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, 
io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git) Dec 2 04:03:30 localhost podman[103604]: 2025-12-02 09:03:30.248771054 +0000 UTC m=+0.242213998 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 2 04:03:30 localhost podman[103603]: 2025-12-02 09:03:30.277489602 +0000 UTC m=+0.273681370 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-compute) Dec 2 04:03:30 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 04:03:30 localhost podman[103604]: 2025-12-02 09:03:30.306851061 +0000 UTC m=+0.300293985 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 04:03:30 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully. Dec 2 04:03:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:03:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:03:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 04:03:34 localhost podman[103673]: 2025-12-02 09:03:34.068832416 +0000 UTC m=+0.076786270 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 04:03:34 localhost podman[103673]: 2025-12-02 09:03:34.121043893 +0000 UTC m=+0.128997777 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, container_name=nova_compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:03:34 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 04:03:34 localhost podman[103674]: 2025-12-02 09:03:34.134915457 +0000 UTC m=+0.138479886 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Dec 2 04:03:34 localhost systemd[1]: tmp-crun.74QgRZ.mount: Deactivated successfully. 
Dec 2 04:03:34 localhost podman[103675]: 2025-12-02 09:03:34.197636606 +0000 UTC m=+0.196352676 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 04:03:34 localhost podman[103674]: 2025-12-02 09:03:34.214369858 +0000 UTC m=+0.217934267 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Dec 2 04:03:34 localhost podman[103674]: unhealthy Dec 2 04:03:34 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:03:34 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 04:03:34 localhost podman[103675]: 2025-12-02 09:03:34.241941841 +0000 UTC m=+0.240657911 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, tcib_managed=true, container_name=ovn_controller) Dec 2 04:03:34 localhost podman[103675]: unhealthy Dec 2 04:03:34 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:03:34 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:03:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 04:03:35 localhost podman[103739]: 2025-12-02 09:03:35.051614794 +0000 UTC m=+0.062006419 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 04:03:35 localhost podman[103739]: 2025-12-02 09:03:35.358928432 +0000 UTC m=+0.369320057 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, 
com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, tcib_managed=true, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:03:35 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 04:03:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 04:03:42 localhost podman[103761]: 2025-12-02 09:03:42.081540745 +0000 UTC m=+0.084866116 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 04:03:42 localhost podman[103761]: 2025-12-02 09:03:42.089149898 +0000 UTC m=+0.092475229 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step3, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) 
Dec 2 04:03:42 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 04:03:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 04:03:45 localhost podman[103780]: 2025-12-02 09:03:45.085808577 +0000 UTC m=+0.088331542 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 04:03:45 localhost podman[103780]: 2025-12-02 09:03:45.099104704 +0000 UTC m=+0.101627669 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Dec 2 04:03:45 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 04:03:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 04:03:57 localhost podman[103799]: 2025-12-02 09:03:57.071943528 +0000 UTC m=+0.079537113 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, 
vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 04:03:57 localhost podman[103799]: 2025-12-02 09:03:57.26590331 +0000 UTC m=+0.273496875 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z) Dec 2 04:03:57 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 04:04:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 04:04:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 04:04:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 04:04:01 localhost podman[103828]: 2025-12-02 09:04:01.07079194 +0000 UTC m=+0.076096719 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=) Dec 2 04:04:01 localhost podman[103829]: 2025-12-02 09:04:01.089506081 +0000 UTC m=+0.087496316 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, version=17.1.12) Dec 2 04:04:01 localhost podman[103828]: 2025-12-02 09:04:01.104976065 +0000 UTC m=+0.110280824 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, release=1761123044, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Dec 2 04:04:01 localhost podman[103829]: 2025-12-02 09:04:01.115792316 +0000 UTC m=+0.113782561 container exec_died 
814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 04:04:01 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 04:04:01 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 04:04:01 localhost podman[103833]: 2025-12-02 09:04:01.195041699 +0000 UTC m=+0.188276739 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi) Dec 2 04:04:01 localhost podman[103833]: 2025-12-02 09:04:01.22190053 +0000 UTC m=+0.215135630 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Dec 2 04:04:01 localhost podman[103833]: unhealthy Dec 2 04:04:01 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:04:01 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'. Dec 2 04:04:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:04:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:04:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 04:04:05 localhost podman[103903]: 2025-12-02 09:04:05.063710898 +0000 UTC m=+0.068986042 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, distribution-scope=public, 
tcib_managed=true) Dec 2 04:04:05 localhost podman[103902]: 2025-12-02 09:04:05.129141498 +0000 UTC m=+0.134085992 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, distribution-scope=public, architecture=x86_64, container_name=nova_compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 04:04:05 localhost podman[103904]: 2025-12-02 09:04:05.091160647 +0000 UTC m=+0.090761837 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-ovn-controller, 
distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 04:04:05 localhost podman[103903]: 2025-12-02 09:04:05.150130281 +0000 UTC m=+0.155405445 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, build-date=2025-11-19T00:14:25Z, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com) Dec 2 04:04:05 localhost podman[103903]: unhealthy Dec 2 04:04:05 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:04:05 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 04:04:05 localhost podman[103902]: 2025-12-02 09:04:05.162606972 +0000 UTC m=+0.167551406 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, version=17.1.12, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 04:04:05 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully. Dec 2 04:04:05 localhost podman[103904]: 2025-12-02 09:04:05.176051793 +0000 UTC m=+0.175653013 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, 
container_name=ovn_controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 04:04:05 localhost podman[103904]: unhealthy Dec 2 04:04:05 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:04:05 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:04:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 04:04:06 localhost systemd[1]: tmp-crun.CCnxJk.mount: Deactivated successfully. 
Dec 2 04:04:06 localhost podman[103964]: 2025-12-02 09:04:06.067338432 +0000 UTC m=+0.075548131 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, 
architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 04:04:06 localhost podman[103964]: 2025-12-02 09:04:06.443991162 +0000 UTC m=+0.452200781 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-nova-compute, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Dec 2 04:04:06 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 04:04:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 04:04:13 localhost systemd[1]: tmp-crun.2cf6hE.mount: Deactivated successfully. 
Dec 2 04:04:13 localhost podman[104062]: 2025-12-02 09:04:13.095367256 +0000 UTC m=+0.099520846 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-collectd) Dec 2 04:04:13 localhost podman[104062]: 2025-12-02 09:04:13.106186197 +0000 UTC m=+0.110339807 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-type=git, container_name=collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:04:13 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 04:04:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 04:04:16 localhost podman[104083]: 2025-12-02 09:04:16.069080892 +0000 UTC m=+0.075904252 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Dec 2 04:04:16 localhost podman[104083]: 2025-12-02 09:04:16.082784001 +0000 UTC m=+0.089607361 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, container_name=iscsid, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container) Dec 2 04:04:16 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 04:04:20 localhost sshd[104102]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:04:21 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 04:04:21 localhost recover_tripleo_nova_virtqemud[104105]: 61907 Dec 2 04:04:21 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 04:04:21 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 04:04:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 04:04:28 localhost podman[104106]: 2025-12-02 09:04:28.088679767 +0000 UTC m=+0.083289709 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Dec 2 04:04:28 localhost podman[104106]: 2025-12-02 09:04:28.309232532 +0000 UTC m=+0.303842494 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step1, version=17.1.12, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 04:04:28 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 04:04:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 04:04:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 04:04:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 04:04:32 localhost systemd[1]: tmp-crun.QNoUyI.mount: Deactivated successfully. 
Dec 2 04:04:32 localhost podman[104136]: 2025-12-02 09:04:32.07387235 +0000 UTC m=+0.073450668 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:04:32 localhost podman[104135]: 2025-12-02 09:04:32.116561035 +0000 UTC m=+0.116395041 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1) Dec 2 04:04:32 localhost podman[104136]: 2025-12-02 09:04:32.146862292 +0000 UTC m=+0.146440580 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute) Dec 2 04:04:32 
localhost podman[104135]: 2025-12-02 09:04:32.154873516 +0000 UTC m=+0.154707522 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, config_id=tripleo_step4) Dec 2 04:04:32 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 04:04:32 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 04:04:32 localhost podman[104137]: 2025-12-02 09:04:32.232106949 +0000 UTC m=+0.227082116 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true) Dec 2 04:04:32 localhost podman[104137]: 2025-12-02 09:04:32.258839327 +0000 UTC m=+0.253814454 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, release=1761123044) Dec 2 04:04:32 localhost podman[104137]: 
unhealthy Dec 2 04:04:32 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:04:32 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'. Dec 2 04:04:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:04:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:04:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 04:04:36 localhost systemd[1]: tmp-crun.tFmtG1.mount: Deactivated successfully. Dec 2 04:04:36 localhost podman[104211]: 2025-12-02 09:04:36.08612571 +0000 UTC m=+0.085536758 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 04:04:36 localhost podman[104210]: 2025-12-02 09:04:36.065367884 +0000 UTC m=+0.069730674 container 
health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=nova_compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:04:36 localhost podman[104212]: 2025-12-02 09:04:36.125120852 +0000 UTC m=+0.122017733 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 04:04:36 localhost podman[104211]: 2025-12-02 09:04:36.124973428 +0000 UTC m=+0.124384476 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git) Dec 2 04:04:36 localhost podman[104211]: unhealthy Dec 2 04:04:36 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:04:36 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 04:04:36 localhost podman[104210]: 2025-12-02 09:04:36.144096152 +0000 UTC m=+0.148458942 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': 
{'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step5, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, release=1761123044) Dec 2 04:04:36 localhost podman[104210]: unhealthy Dec 2 04:04:36 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:04:36 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'. Dec 2 04:04:36 localhost podman[104212]: 2025-12-02 09:04:36.194328828 +0000 UTC m=+0.191225679 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=ovn_controller, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public) Dec 2 04:04:36 localhost podman[104212]: unhealthy Dec 2 04:04:36 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:04:36 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:04:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 04:04:37 localhost podman[104268]: 2025-12-02 09:04:37.068539145 +0000 UTC m=+0.076259493 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step4, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute) Dec 2 04:04:37 localhost podman[104268]: 2025-12-02 09:04:37.385609403 +0000 UTC m=+0.393329781 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:04:37 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 04:04:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 04:04:44 localhost podman[104292]: 2025-12-02 09:04:44.081701743 +0000 UTC m=+0.084678920 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, com.redhat.component=openstack-collectd-container, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 04:04:44 localhost podman[104292]: 2025-12-02 09:04:44.114897419 +0000 UTC m=+0.117874596 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 04:04:44 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 04:04:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 04:04:47 localhost podman[104312]: 2025-12-02 09:04:47.06030048 +0000 UTC m=+0.067116364 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-iscsid, container_name=iscsid, release=1761123044, version=17.1.12, build-date=2025-11-18T23:44:13Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, vcs-type=git) Dec 2 04:04:47 localhost podman[104312]: 2025-12-02 09:04:47.094243768 +0000 UTC m=+0.101059662 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.12, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, managed_by=tripleo_ansible, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:04:47 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 04:04:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 04:04:59 localhost systemd[1]: tmp-crun.T98Prj.mount: Deactivated successfully. 
Dec 2 04:04:59 localhost podman[104332]: 2025-12-02 09:04:59.085350472 +0000 UTC m=+0.092587713 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd) Dec 2 04:04:59 localhost podman[104332]: 2025-12-02 09:04:59.306170676 +0000 UTC m=+0.313407897 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step1, tcib_managed=true, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, 
name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 2 04:04:59 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 04:05:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 04:05:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 04:05:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 04:05:03 localhost podman[104361]: 2025-12-02 09:05:03.086851082 +0000 UTC m=+0.088658462 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z) Dec 2 04:05:03 localhost podman[104361]: 2025-12-02 09:05:03.094213798 +0000 UTC m=+0.096021228 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-cron, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 04:05:03 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 04:05:03 localhost podman[104363]: 2025-12-02 09:05:03.135882603 +0000 UTC m=+0.131287337 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 04:05:03 localhost podman[104362]: 2025-12-02 09:05:03.192485153 +0000 UTC m=+0.190460766 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container) Dec 2 04:05:03 localhost podman[104363]: 2025-12-02 09:05:03.215061604 +0000 UTC m=+0.210466338 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 2 04:05:03 localhost podman[104363]: unhealthy Dec 2 
04:05:03 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:03 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'. Dec 2 04:05:03 localhost podman[104362]: 2025-12-02 09:05:03.269061955 +0000 UTC m=+0.267037548 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 04:05:03 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. Dec 2 04:05:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:05:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:05:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 04:05:07 localhost systemd[1]: tmp-crun.jgFNvW.mount: Deactivated successfully. 
Dec 2 04:05:07 localhost podman[104433]: 2025-12-02 09:05:07.084987671 +0000 UTC m=+0.085247348 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, release=1761123044, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team) Dec 2 04:05:07 localhost podman[104433]: 2025-12-02 09:05:07.133906478 +0000 UTC m=+0.134166125 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.component=openstack-nova-compute-container, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, 
konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 04:05:07 localhost podman[104433]: unhealthy Dec 2 04:05:07 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:07 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'. Dec 2 04:05:07 localhost podman[104435]: 2025-12-02 09:05:07.178741899 +0000 UTC m=+0.173765955 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1) Dec 2 04:05:07 localhost podman[104434]: 2025-12-02 09:05:07.133724392 +0000 UTC m=+0.132569126 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, url=https://www.redhat.com, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, 
tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible) Dec 2 04:05:07 localhost podman[104435]: 2025-12-02 09:05:07.216858134 +0000 UTC m=+0.211882170 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, 
vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.) Dec 2 04:05:07 localhost podman[104434]: 2025-12-02 09:05:07.217419142 +0000 UTC m=+0.216263896 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 2 04:05:07 localhost podman[104435]: unhealthy Dec 2 04:05:07 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:07 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. 
Dec 2 04:05:07 localhost podman[104434]: unhealthy Dec 2 04:05:07 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:07 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 04:05:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 04:05:07 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 04:05:08 localhost recover_tripleo_nova_virtqemud[104495]: 61907 Dec 2 04:05:08 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 04:05:08 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 04:05:08 localhost podman[104493]: 2025-12-02 09:05:08.05830651 +0000 UTC m=+0.065224617 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Dec 2 04:05:08 localhost podman[104493]: 2025-12-02 09:05:08.392992425 +0000 UTC m=+0.399910572 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 
17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044) Dec 2 04:05:08 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 04:05:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 04:05:15 localhost podman[104596]: 2025-12-02 09:05:15.098377372 +0000 UTC m=+0.096369509 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, release=1761123044, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc.) 
Dec 2 04:05:15 localhost podman[104596]: 2025-12-02 09:05:15.107900943 +0000 UTC m=+0.105893090 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, architecture=x86_64, config_id=tripleo_step3, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z) Dec 2 04:05:15 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 04:05:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 04:05:18 localhost podman[104616]: 2025-12-02 09:05:18.072314866 +0000 UTC m=+0.076436918 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, 
com.redhat.component=openstack-iscsid-container, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, tcib_managed=true) Dec 2 04:05:18 localhost podman[104616]: 2025-12-02 09:05:18.107329928 +0000 UTC m=+0.111452020 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3) Dec 2 04:05:18 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. Dec 2 04:05:19 localhost sshd[104636]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:05:19 localhost systemd-logind[760]: New session 35 of user zuul. Dec 2 04:05:19 localhost systemd[1]: Started Session 35 of User zuul. 
Dec 2 04:05:20 localhost python3.9[104731]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:05:21 localhost python3.9[104825]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:05:21 localhost python3.9[104918]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:05:22 localhost python3.9[105012]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:05:23 localhost python3.9[105105]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:05:23 localhost python3.9[105196]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Dec 2 04:05:25 localhost python3.9[105286]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False 
get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:05:26 localhost python3.9[105378]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Dec 2 04:05:27 localhost python3.9[105468]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:05:28 localhost python3.9[105516]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:05:28 localhost systemd[1]: session-35.scope: Deactivated successfully. Dec 2 04:05:28 localhost systemd[1]: session-35.scope: Consumed 4.465s CPU time. Dec 2 04:05:28 localhost systemd-logind[760]: Session 35 logged out. Waiting for processes to exit. Dec 2 04:05:28 localhost systemd-logind[760]: Removed session 35. Dec 2 04:05:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 04:05:30 localhost podman[105532]: 2025-12-02 09:05:30.081652216 +0000 UTC m=+0.088358673 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, release=1761123044, config_id=tripleo_step1, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr) Dec 2 04:05:30 localhost podman[105532]: 2025-12-02 09:05:30.274801783 +0000 UTC m=+0.281508230 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step1, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:05:30 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 04:05:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20671 DF PROTO=TCP SPT=48726 DPT=9882 SEQ=1985574544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CA8800000000001030307) Dec 2 04:05:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. 
Dec 2 04:05:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 04:05:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 04:05:34 localhost podman[105562]: 2025-12-02 09:05:34.106866033 +0000 UTC m=+0.110835301 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, release=1761123044, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 2 04:05:34 localhost podman[105562]: 2025-12-02 09:05:34.157778529 +0000 UTC m=+0.161747777 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible) Dec 2 04:05:34 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully. 
Dec 2 04:05:34 localhost podman[105563]: 2025-12-02 09:05:34.168099745 +0000 UTC m=+0.168790072 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, release=1761123044, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, architecture=x86_64) Dec 2 04:05:34 localhost podman[105563]: 2025-12-02 09:05:34.217529858 +0000 UTC m=+0.218220175 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, version=17.1.12) Dec 2 04:05:34 localhost podman[105563]: unhealthy Dec 2 04:05:34 localhost podman[105561]: 2025-12-02 09:05:34.223575882 +0000 UTC m=+0.226759206 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 04:05:34 localhost systemd[1]: 
a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:34 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'. Dec 2 04:05:34 localhost podman[105561]: 2025-12-02 09:05:34.233885637 +0000 UTC m=+0.237068911 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, release=1761123044, name=rhosp17/openstack-cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 
'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 2 04:05:34 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 04:05:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20672 DF PROTO=TCP SPT=48726 DPT=9882 SEQ=1985574544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CACA30000000001030307) Dec 2 04:05:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20673 DF PROTO=TCP SPT=48726 DPT=9882 SEQ=1985574544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CB4A20000000001030307) Dec 2 04:05:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:05:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. 
Dec 2 04:05:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 04:05:38 localhost podman[105637]: 2025-12-02 09:05:38.06475946 +0000 UTC m=+0.063979298 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T23:34:05Z, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=ovn_controller) Dec 2 04:05:38 localhost podman[105636]: 2025-12-02 09:05:38.135186263 +0000 UTC m=+0.134584637 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:05:38 localhost systemd[1]: tmp-crun.v9VtVW.mount: Deactivated successfully. 
Dec 2 04:05:38 localhost podman[105635]: 2025-12-02 09:05:38.173246778 +0000 UTC m=+0.175195440 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=nova_compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 04:05:38 localhost podman[105636]: 2025-12-02 09:05:38.203623507 +0000 UTC m=+0.203021851 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, batch=17.1_20251118.1, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, 
konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4) Dec 2 04:05:38 localhost podman[105636]: unhealthy Dec 2 04:05:38 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:38 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 04:05:38 localhost podman[105635]: 2025-12-02 09:05:38.216040936 +0000 UTC m=+0.217989548 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, container_name=nova_compute, version=17.1.12) Dec 2 04:05:38 localhost podman[105635]: unhealthy Dec 2 04:05:38 localhost systemd[1]: 
6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:38 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'. Dec 2 04:05:38 localhost podman[105637]: 2025-12-02 09:05:38.256139892 +0000 UTC m=+0.255359760 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, tcib_managed=true, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, 
build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 04:05:38 localhost podman[105637]: unhealthy Dec 2 04:05:38 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:38 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:05:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10670 DF PROTO=TCP SPT=46272 DPT=9100 SEQ=2936596451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CBB560000000001030307) Dec 2 04:05:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. 
Dec 2 04:05:39 localhost podman[105698]: 2025-12-02 09:05:39.067636551 +0000 UTC m=+0.075838071 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=) Dec 2 04:05:39 localhost podman[105698]: 2025-12-02 09:05:39.402836103 +0000 UTC m=+0.411037633 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute) Dec 2 04:05:39 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. 
Dec 2 04:05:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10671 DF PROTO=TCP SPT=46272 DPT=9100 SEQ=2936596451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CBF620000000001030307) Dec 2 04:05:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20674 DF PROTO=TCP SPT=48726 DPT=9882 SEQ=1985574544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CC4630000000001030307) Dec 2 04:05:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10672 DF PROTO=TCP SPT=46272 DPT=9100 SEQ=2936596451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CC7620000000001030307) Dec 2 04:05:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30749 DF PROTO=TCP SPT=40650 DPT=9105 SEQ=4228247310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CC9960000000001030307) Dec 2 04:05:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30750 DF PROTO=TCP SPT=40650 DPT=9105 SEQ=4228247310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CCDA20000000001030307) Dec 2 04:05:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30751 DF PROTO=TCP SPT=40650 DPT=9105 SEQ=4228247310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AD52CD5A20000000001030307) Dec 2 04:05:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10673 DF PROTO=TCP SPT=46272 DPT=9100 SEQ=2936596451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CD7220000000001030307) Dec 2 04:05:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 04:05:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39343 DF PROTO=TCP SPT=36356 DPT=9102 SEQ=1817403735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CD8DE0000000001030307) Dec 2 04:05:46 localhost podman[105721]: 2025-12-02 09:05:46.07133864 +0000 UTC m=+0.075660465 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vcs-type=git) Dec 2 04:05:46 localhost podman[105721]: 2025-12-02 09:05:46.107939719 +0000 UTC m=+0.112261584 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, tcib_managed=true, 
container_name=collectd, vcs-type=git, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack 
Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12) Dec 2 04:05:46 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. Dec 2 04:05:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39344 DF PROTO=TCP SPT=36356 DPT=9102 SEQ=1817403735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CDCE30000000001030307) Dec 2 04:05:48 localhost sshd[105741]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:05:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 04:05:48 localhost systemd-logind[760]: New session 36 of user zuul. Dec 2 04:05:48 localhost systemd[1]: Started Session 36 of User zuul. Dec 2 04:05:48 localhost systemd[1]: tmp-crun.x0Mbcp.mount: Deactivated successfully. 
Dec 2 04:05:48 localhost podman[105743]: 2025-12-02 09:05:48.801395236 +0000 UTC m=+0.092771839 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z) Dec 2 04:05:48 localhost podman[105743]: 2025-12-02 09:05:48.836968833 +0000 UTC m=+0.128345436 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:05:48 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 04:05:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39345 DF PROTO=TCP SPT=36356 DPT=9102 SEQ=1817403735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CE4E20000000001030307) Dec 2 04:05:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20675 DF PROTO=TCP SPT=48726 DPT=9882 SEQ=1985574544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CE5220000000001030307) Dec 2 04:05:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30752 DF PROTO=TCP SPT=40650 DPT=9105 SEQ=4228247310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CE5630000000001030307) Dec 2 04:05:49 localhost python3.9[105854]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:05:49 localhost systemd[1]: Reloading. Dec 2 04:05:49 localhost systemd-rc-local-generator[105875]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:05:49 localhost systemd-sysv-generator[105878]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:05:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:05:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35764 DF PROTO=TCP SPT=36922 DPT=9101 SEQ=3845722813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CE8A00000000001030307) Dec 2 04:05:50 localhost python3.9[105980]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:05:51 localhost network[105997]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:05:51 localhost network[105998]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:05:51 localhost network[105999]: It is advised to switch to 'NetworkManager' instead for network management. Dec 2 04:05:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35765 DF PROTO=TCP SPT=36922 DPT=9101 SEQ=3845722813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CECA20000000001030307) Dec 2 04:05:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35766 DF PROTO=TCP SPT=36922 DPT=9101 SEQ=3845722813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CF4A30000000001030307) Dec 2 04:05:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39346 DF PROTO=TCP SPT=36356 DPT=9102 SEQ=1817403735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CF4A30000000001030307) Dec 2 04:05:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:05:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10674 DF PROTO=TCP SPT=46272 DPT=9100 SEQ=2936596451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CF7230000000001030307) Dec 2 04:05:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35767 DF PROTO=TCP SPT=36922 DPT=9101 SEQ=3845722813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D04620000000001030307) Dec 2 04:05:57 localhost python3.9[106197]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:05:57 localhost network[106214]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:05:57 localhost network[106215]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:05:57 localhost network[106216]: It is advised to switch to 'NetworkManager' instead for network management. Dec 2 04:05:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:06:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 04:06:00 localhost podman[106314]: 2025-12-02 09:06:00.432351145 +0000 UTC m=+0.097910276 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 04:06:00 localhost podman[106314]: 2025-12-02 09:06:00.653864289 +0000 UTC m=+0.319423420 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:06:00 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. 
Dec 2 04:06:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39347 DF PROTO=TCP SPT=36356 DPT=9102 SEQ=1817403735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D15220000000001030307) Dec 2 04:06:01 localhost python3.9[106447]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:06:01 localhost systemd[1]: Reloading. Dec 2 04:06:01 localhost systemd-sysv-generator[106480]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:06:01 localhost systemd-rc-local-generator[106473]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:06:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:06:02 localhost systemd[1]: Stopping ceilometer_agent_compute container... Dec 2 04:06:02 localhost systemd[1]: tmp-crun.7ZhWyg.mount: Deactivated successfully. Dec 2 04:06:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45110 DF PROTO=TCP SPT=37222 DPT=9882 SEQ=3299814066 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D1DB10000000001030307) Dec 2 04:06:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. 
Dec 2 04:06:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 04:06:04 localhost podman[106502]: Error: container 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae is not running Dec 2 04:06:04 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Main process exited, code=exited, status=125/n/a Dec 2 04:06:04 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Failed with result 'exit-code'. Dec 2 04:06:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 04:06:04 localhost systemd[1]: tmp-crun.jRpwZV.mount: Deactivated successfully. Dec 2 04:06:04 localhost podman[106503]: 2025-12-02 09:06:04.408578974 +0000 UTC m=+0.158512830 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, distribution-scope=public) Dec 2 04:06:04 localhost podman[106523]: 2025-12-02 09:06:04.466383101 +0000 UTC m=+0.115086401 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, 
vcs-type=git, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64) Dec 2 04:06:04 localhost 
podman[106503]: 2025-12-02 09:06:04.474938902 +0000 UTC m=+0.224872748 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 04:06:04 localhost podman[106503]: unhealthy Dec 2 04:06:04 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:04 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'. Dec 2 04:06:04 localhost podman[106523]: 2025-12-02 09:06:04.5049399 +0000 UTC m=+0.153643190 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:06:04 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. 
Dec 2 04:06:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45111 DF PROTO=TCP SPT=37222 DPT=9882 SEQ=3299814066 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D21A20000000001030307) Dec 2 04:06:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45112 DF PROTO=TCP SPT=37222 DPT=9882 SEQ=3299814066 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D29A20000000001030307) Dec 2 04:06:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:06:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:06:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 04:06:08 localhost podman[106555]: 2025-12-02 09:06:08.597348341 +0000 UTC m=+0.090789478 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:06:08 localhost podman[106554]: 2025-12-02 09:06:08.643271705 +0000 UTC m=+0.143768488 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, config_id=tripleo_step5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, release=1761123044, 
description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git) Dec 2 04:06:08 localhost podman[106555]: 2025-12-02 09:06:08.664445853 +0000 UTC m=+0.157886940 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 04:06:08 localhost podman[106555]: unhealthy Dec 2 04:06:08 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:08 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 04:06:08 localhost podman[106554]: 2025-12-02 09:06:08.693927905 +0000 UTC m=+0.194424688 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=nova_compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 04:06:08 localhost podman[106554]: unhealthy Dec 2 04:06:08 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:08 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'. Dec 2 04:06:08 localhost systemd[1]: tmp-crun.29fWcx.mount: Deactivated successfully. 
Dec 2 04:06:08 localhost podman[106561]: 2025-12-02 09:06:08.750735742 +0000 UTC m=+0.237983749 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Dec 2 04:06:08 localhost podman[106561]: 2025-12-02 09:06:08.768835745 +0000 UTC m=+0.256083782 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=) Dec 2 04:06:08 localhost podman[106561]: unhealthy Dec 2 04:06:08 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:08 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:06:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 04:06:10 localhost podman[106619]: 2025-12-02 09:06:10.075041885 +0000 UTC m=+0.078542724 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
version=17.1.12, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 04:06:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10675 DF PROTO=TCP SPT=46272 DPT=9100 SEQ=2936596451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D37220000000001030307) Dec 2 04:06:10 localhost podman[106619]: 2025-12-02 09:06:10.446777254 +0000 UTC m=+0.450278023 container exec_died 
f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044) Dec 2 04:06:10 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 04:06:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52301 DF PROTO=TCP SPT=34734 DPT=9105 SEQ=1251079634 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D42E20000000001030307) Dec 2 04:06:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55973 DF PROTO=TCP SPT=33098 DPT=9102 SEQ=469635889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D4E0E0000000001030307) Dec 2 04:06:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 04:06:16 localhost podman[106720]: 2025-12-02 09:06:16.583660764 +0000 UTC m=+0.091082758 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 
17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 2 04:06:16 localhost podman[106720]: 2025-12-02 09:06:16.621945515 +0000 UTC m=+0.129367589 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., release=1761123044, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 04:06:16 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. 
Dec 2 04:06:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45114 DF PROTO=TCP SPT=37222 DPT=9882 SEQ=3299814066 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D59220000000001030307) Dec 2 04:06:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 04:06:19 localhost podman[106739]: 2025-12-02 09:06:19.077182055 +0000 UTC m=+0.077334587 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 04:06:19 localhost podman[106739]: 2025-12-02 09:06:19.114731964 +0000 UTC m=+0.114884475 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git) Dec 2 04:06:19 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 04:06:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35769 DF PROTO=TCP SPT=36922 DPT=9101 SEQ=3845722813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D65220000000001030307) Dec 2 04:06:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5822 DF PROTO=TCP SPT=59090 DPT=9101 SEQ=3229159924 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D79A30000000001030307) Dec 2 04:06:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 04:06:30 localhost podman[106758]: 2025-12-02 09:06:30.836318593 +0000 UTC m=+0.084142594 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Dec 2 04:06:31 localhost podman[106758]: 2025-12-02 09:06:31.038964891 +0000 UTC m=+0.286788882 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, config_id=tripleo_step1, io.openshift.expose-services=, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 
17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
managed_by=tripleo_ansible) Dec 2 04:06:31 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 04:06:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55977 DF PROTO=TCP SPT=33098 DPT=9102 SEQ=469635889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D8B220000000001030307) Dec 2 04:06:33 localhost sshd[106786]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:06:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50034 DF PROTO=TCP SPT=54924 DPT=9882 SEQ=2544141676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D92E00000000001030307) Dec 2 04:06:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 04:06:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 04:06:34 localhost podman[106788]: Error: container 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae is not running Dec 2 04:06:34 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Main process exited, code=exited, status=125/n/a Dec 2 04:06:34 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Failed with result 'exit-code'. Dec 2 04:06:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. 
Dec 2 04:06:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50035 DF PROTO=TCP SPT=54924 DPT=9882 SEQ=2544141676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D96E20000000001030307) Dec 2 04:06:34 localhost systemd[1]: tmp-crun.9omwLL.mount: Deactivated successfully. Dec 2 04:06:34 localhost podman[106812]: 2025-12-02 09:06:34.715779092 +0000 UTC m=+0.078719269 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 04:06:34 localhost podman[106789]: 2025-12-02 09:06:34.682850475 +0000 UTC m=+0.138164967 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:06:34 localhost podman[106812]: 2025-12-02 09:06:34.751851545 +0000 UTC m=+0.114791722 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, 
url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 04:06:34 localhost podman[106789]: 2025-12-02 09:06:34.766001788 +0000 UTC m=+0.221316250 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 2 04:06:34 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 04:06:34 localhost podman[106789]: unhealthy Dec 2 04:06:34 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:34 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'. Dec 2 04:06:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50036 DF PROTO=TCP SPT=54924 DPT=9882 SEQ=2544141676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D9EE20000000001030307) Dec 2 04:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:06:38 localhost systemd[1]: tmp-crun.sA3piG.mount: Deactivated successfully. 
Dec 2 04:06:38 localhost podman[106848]: 2025-12-02 09:06:38.825360809 +0000 UTC m=+0.075278492 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 2 04:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 04:06:38 localhost podman[106878]: 2025-12-02 09:06:38.913414283 +0000 UTC m=+0.065467024 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 04:06:38 localhost podman[106847]: 2025-12-02 09:06:38.884702635 +0000 UTC m=+0.133781853 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 
'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 04:06:38 localhost podman[106878]: 2025-12-02 09:06:38.948293659 +0000 UTC m=+0.100346420 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4) Dec 2 04:06:38 localhost podman[106878]: unhealthy Dec 2 04:06:38 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:38 localhost systemd[1]: 
b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:06:38 localhost podman[106848]: 2025-12-02 09:06:38.965258438 +0000 UTC m=+0.215176071 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 04:06:38 localhost podman[106848]: unhealthy Dec 2 04:06:38 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:38 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 04:06:39 localhost podman[106847]: 2025-12-02 09:06:39.015732751 +0000 UTC m=+0.264812009 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Dec 2 04:06:39 localhost podman[106847]: unhealthy Dec 2 04:06:39 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:39 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'. 
Dec 2 04:06:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4215 DF PROTO=TCP SPT=58242 DPT=9100 SEQ=4061684492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52DAD230000000001030307) Dec 2 04:06:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 04:06:41 localhost podman[106908]: 2025-12-02 09:06:41.081235142 +0000 UTC m=+0.082932108 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Dec 2 04:06:41 localhost podman[106908]: 2025-12-02 09:06:41.422874701 +0000 UTC m=+0.424571677 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, container_name=nova_migration_target, url=https://www.redhat.com, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4) Dec 2 04:06:41 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. 
Dec 2 04:06:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41522 DF PROTO=TCP SPT=35936 DPT=9105 SEQ=2769818466 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52DB7E20000000001030307) Dec 2 04:06:44 localhost podman[106488]: time="2025-12-02T09:06:44Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL" Dec 2 04:06:44 localhost systemd[1]: libpod-814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.scope: Deactivated successfully. Dec 2 04:06:44 localhost systemd[1]: libpod-814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.scope: Consumed 5.940s CPU time. Dec 2 04:06:44 localhost podman[106488]: 2025-12-02 09:06:44.107073723 +0000 UTC m=+42.083374370 container stop 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git) Dec 2 04:06:44 localhost podman[106488]: 2025-12-02 09:06:44.139775742 +0000 UTC m=+42.116076349 container died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute) Dec 2 04:06:44 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.timer: Deactivated successfully. Dec 2 04:06:44 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae. Dec 2 04:06:44 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Failed to open /run/systemd/transient/814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: No such file or directory Dec 2 04:06:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae-userdata-shm.mount: Deactivated successfully. 
Dec 2 04:06:44 localhost podman[106488]: 2025-12-02 09:06:44.261055772 +0000 UTC m=+42.237356379 container cleanup 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO 
Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc.) Dec 2 04:06:44 localhost podman[106488]: ceilometer_agent_compute Dec 2 04:06:44 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.timer: Failed to open /run/systemd/transient/814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.timer: No such file or directory Dec 2 04:06:44 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Failed to open /run/systemd/transient/814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: No such file or directory Dec 2 04:06:44 localhost podman[106932]: 2025-12-02 09:06:44.277198935 +0000 UTC m=+0.154836746 container cleanup 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
batch=17.1_20251118.1) Dec 2 04:06:44 localhost systemd[1]: libpod-conmon-814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.scope: Deactivated successfully. Dec 2 04:06:44 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.timer: Failed to open /run/systemd/transient/814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.timer: No such file or directory Dec 2 04:06:44 localhost systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Failed to open /run/systemd/transient/814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: No such file or directory Dec 2 04:06:44 localhost podman[106947]: 2025-12-02 09:06:44.376484262 +0000 UTC m=+0.066883987 container cleanup 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 04:06:44 localhost podman[106947]: ceilometer_agent_compute Dec 2 04:06:44 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully. Dec 2 04:06:44 localhost systemd[1]: Stopped ceilometer_agent_compute container. Dec 2 04:06:44 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.116s CPU time, no IO. Dec 2 04:06:45 localhost systemd[1]: var-lib-containers-storage-overlay-a0089ea487a0d5fd991d7e6cecf5db6fae8c1b61a42816d2acbe202fbd50d575-merged.mount: Deactivated successfully. 
Dec 2 04:06:45 localhost python3.9[107052]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:06:45 localhost systemd[1]: Reloading. Dec 2 04:06:45 localhost systemd-rc-local-generator[107076]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:06:45 localhost systemd-sysv-generator[107080]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:06:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:06:45 localhost systemd[1]: Stopping ceilometer_agent_ipmi container... Dec 2 04:06:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:5b:ed:d2 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.108 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=37490 SEQ=4000989390 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Dec 2 04:06:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. 
Dec 2 04:06:47 localhost podman[107108]: 2025-12-02 09:06:47.057253571 +0000 UTC m=+0.064219265 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, container_name=collectd, config_id=tripleo_step3, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 2 04:06:47 localhost podman[107108]: 2025-12-02 09:06:47.069726462 +0000 UTC m=+0.076692106 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:06:47 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. 
Dec 2 04:06:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50038 DF PROTO=TCP SPT=54924 DPT=9882 SEQ=2544141676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52DCF220000000001030307) Dec 2 04:06:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 04:06:49 localhost podman[107128]: 2025-12-02 09:06:49.338386036 +0000 UTC m=+0.089553070 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, name=rhosp17/openstack-iscsid, vcs-type=git, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4) Dec 2 04:06:49 localhost podman[107128]: 2025-12-02 09:06:49.372554681 +0000 UTC m=+0.123721665 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, description=Red 
Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=) Dec 2 04:06:49 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 04:06:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54639 DF PROTO=TCP SPT=33674 DPT=9102 SEQ=1068771527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52DDF220000000001030307) Dec 2 04:06:54 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 04:06:54 localhost recover_tripleo_nova_virtqemud[107148]: 61907 Dec 2 04:06:54 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 04:06:54 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 04:06:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17515 DF PROTO=TCP SPT=41364 DPT=9101 SEQ=3827894801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52DEEE20000000001030307) Dec 2 04:07:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. Dec 2 04:07:01 localhost systemd[1]: tmp-crun.qYmsHN.mount: Deactivated successfully. 
Dec 2 04:07:01 localhost podman[107149]: 2025-12-02 09:07:01.307591769 +0000 UTC m=+0.068334141 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=) Dec 2 04:07:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54640 DF PROTO=TCP SPT=33674 DPT=9102 SEQ=1068771527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52DFF220000000001030307) Dec 2 04:07:01 localhost podman[107149]: 2025-12-02 09:07:01.521299974 +0000 UTC m=+0.282042376 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1) Dec 2 04:07:01 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. 
Dec 2 04:07:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41096 DF PROTO=TCP SPT=46120 DPT=9882 SEQ=2372430334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E08110000000001030307) Dec 2 04:07:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41097 DF PROTO=TCP SPT=46120 DPT=9882 SEQ=2372430334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E0C220000000001030307) Dec 2 04:07:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 04:07:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. Dec 2 04:07:05 localhost podman[107179]: Error: container a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 is not running Dec 2 04:07:05 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=125/n/a Dec 2 04:07:05 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'. 
Dec 2 04:07:05 localhost podman[107178]: 2025-12-02 09:07:05.138128331 +0000 UTC m=+0.141299402 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red 
Hat, Inc., name=rhosp17/openstack-cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 2 04:07:05 localhost podman[107178]: 2025-12-02 09:07:05.150857151 +0000 UTC m=+0.154028232 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron) Dec 2 04:07:05 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully. Dec 2 04:07:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41098 DF PROTO=TCP SPT=46120 DPT=9882 SEQ=2372430334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E14220000000001030307) Dec 2 04:07:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. 
Dec 2 04:07:09 localhost podman[107210]: 2025-12-02 09:07:09.064092894 +0000 UTC m=+0.071446196 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container) Dec 2 04:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:07:09 localhost podman[107210]: 2025-12-02 09:07:09.101761216 +0000 UTC m=+0.109114508 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, 
maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git) Dec 2 04:07:09 localhost podman[107210]: unhealthy Dec 2 04:07:09 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:07:09 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:07:09 localhost systemd[1]: tmp-crun.UROc6a.mount: Deactivated successfully. 
Dec 2 04:07:09 localhost podman[107230]: 2025-12-02 09:07:09.170598592 +0000 UTC m=+0.084985391 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-type=git, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=) Dec 2 04:07:09 localhost podman[107230]: 2025-12-02 09:07:09.187563321 +0000 UTC m=+0.101950110 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, 
Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 04:07:09 localhost podman[107231]: 2025-12-02 09:07:09.20587915 +0000 UTC m=+0.116043479 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, vcs-type=git, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:07:09 localhost podman[107230]: unhealthy Dec 2 04:07:09 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:07:09 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed 
with result 'exit-code'. Dec 2 04:07:09 localhost podman[107231]: 2025-12-02 09:07:09.293661056 +0000 UTC m=+0.203825425 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12) Dec 2 04:07:09 localhost podman[107231]: unhealthy Dec 2 04:07:09 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:07:09 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 04:07:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=369 DF PROTO=TCP SPT=58674 DPT=9100 SEQ=2170451609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E21220000000001030307)
Dec 2 04:07:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 04:07:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 2 04:07:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 2 04:07:11 localhost systemd[1]: tmp-crun.b4Qa3M.mount: Deactivated successfully.
Dec 2 04:07:11 localhost podman[107272]: 2025-12-02 09:07:11.833084341 +0000 UTC m=+0.087455826 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:07:12 localhost podman[107272]: 2025-12-02 09:07:12.24864227 +0000 UTC m=+0.503013815 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:36:58Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, version=17.1.12) Dec 2 04:07:12 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. 
Dec 2 04:07:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16501 DF PROTO=TCP SPT=46310 DPT=9105 SEQ=1984692707 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E2D220000000001030307)
Dec 2 04:07:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 04:07:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.2 total, 600.0 interval#012Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 2 04:07:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11351 DF PROTO=TCP SPT=51514 DPT=9102 SEQ=2630889921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E386E0000000001030307)
Dec 2 04:07:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 2 04:07:17 localhost podman[107372]: 2025-12-02 09:07:17.338717362 +0000 UTC m=+0.083964409 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team) Dec 2 04:07:17 localhost podman[107372]: 2025-12-02 09:07:17.373761435 +0000 UTC m=+0.119008472 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z) Dec 2 04:07:17 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully. 
Dec 2 04:07:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11353 DF PROTO=TCP SPT=51514 DPT=9102 SEQ=2630889921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E44620000000001030307) Dec 2 04:07:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. Dec 2 04:07:19 localhost podman[107392]: 2025-12-02 09:07:19.574549212 +0000 UTC m=+0.077314995 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step3, description=Red Hat OpenStack Platform 
17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, release=1761123044, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z) Dec 2 04:07:19 localhost podman[107392]: 2025-12-02 09:07:19.612965517 +0000 UTC m=+0.115731320 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 04:07:19 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully. 
Dec 2 04:07:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17517 DF PROTO=TCP SPT=41364 DPT=9101 SEQ=3827894801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E4F220000000001030307)
Dec 2 04:07:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17822 DF PROTO=TCP SPT=35444 DPT=9101 SEQ=1492685268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E63E20000000001030307)
Dec 2 04:07:27 localhost podman[107092]: time="2025-12-02T09:07:27Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL"
Dec 2 04:07:27 localhost systemd[1]: tmp-crun.UAbWed.mount: Deactivated successfully.
Dec 2 04:07:27 localhost systemd[1]: libpod-a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.scope: Deactivated successfully.
Dec 2 04:07:27 localhost systemd[1]: libpod-a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.scope: Consumed 6.746s CPU time.
Dec 2 04:07:27 localhost podman[107092]: 2025-12-02 09:07:27.822931359 +0000 UTC m=+42.107635982 container stop a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 04:07:28 localhost podman[107092]: 2025-12-02 09:07:28.171678385 +0000 UTC m=+42.456383038 container died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Dec 2 04:07:28 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.timer: Deactivated successfully. Dec 2 04:07:28 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497. 
Dec 2 04:07:28 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed to open /run/systemd/transient/a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: No such file or directory Dec 2 04:07:28 localhost podman[107092]: 2025-12-02 09:07:28.270375974 +0000 UTC m=+42.555080597 container cleanup a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12) Dec 2 04:07:28 localhost podman[107092]: ceilometer_agent_ipmi Dec 2 04:07:28 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.timer: Failed to open /run/systemd/transient/a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.timer: No such file or directory Dec 2 04:07:28 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed to open /run/systemd/transient/a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: No such file or directory Dec 2 04:07:28 localhost podman[107413]: 2025-12-02 09:07:28.284122324 +0000 UTC m=+0.126114979 container cleanup a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12) Dec 2 04:07:28 localhost 
systemd[1]: libpod-conmon-a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.scope: Deactivated successfully. Dec 2 04:07:28 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.timer: Failed to open /run/systemd/transient/a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.timer: No such file or directory Dec 2 04:07:28 localhost systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed to open /run/systemd/transient/a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: No such file or directory Dec 2 04:07:28 localhost podman[107425]: 2025-12-02 09:07:28.388904019 +0000 UTC m=+0.071296782 container cleanup a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4) Dec 2 04:07:28 localhost podman[107425]: ceilometer_agent_ipmi Dec 2 04:07:28 localhost systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully. Dec 2 04:07:28 localhost systemd[1]: Stopped ceilometer_agent_ipmi container. Dec 2 04:07:28 localhost systemd[1]: var-lib-containers-storage-overlay-d06b9618ea7afeaba672d022a7f469c1b4fb954818b2395f63391bb50912ecbb-merged.mount: Deactivated successfully. Dec 2 04:07:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497-userdata-shm.mount: Deactivated successfully. 
Dec 2 04:07:29 localhost python3.9[107529]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:07:29 localhost systemd[1]: Reloading. Dec 2 04:07:29 localhost systemd-sysv-generator[107558]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:07:29 localhost systemd-rc-local-generator[107553]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:07:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:07:29 localhost systemd[1]: Stopping collectd container... Dec 2 04:07:31 localhost systemd[1]: libpod-2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.scope: Deactivated successfully. Dec 2 04:07:31 localhost systemd[1]: libpod-2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.scope: Consumed 2.164s CPU time. 
Dec 2 04:07:31 localhost podman[107570]: 2025-12-02 09:07:31.481372277 +0000 UTC m=+1.970685611 container died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 2 04:07:31 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.timer: Deactivated successfully. Dec 2 04:07:31 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c. Dec 2 04:07:31 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Failed to open /run/systemd/transient/2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: No such file or directory Dec 2 04:07:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c-userdata-shm.mount: Deactivated successfully. 
Dec 2 04:07:31 localhost podman[107570]: 2025-12-02 09:07:31.54259163 +0000 UTC m=+2.031904884 container cleanup 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-collectd, 
build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:07:31 localhost podman[107570]: collectd Dec 2 04:07:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11355 DF PROTO=TCP SPT=51514 DPT=9102 SEQ=2630889921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E75220000000001030307) Dec 2 04:07:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 04:07:31 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.timer: Failed to open /run/systemd/transient/2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.timer: No such file or directory Dec 2 04:07:31 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Failed to open /run/systemd/transient/2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: No such file or directory Dec 2 04:07:31 localhost podman[107583]: 2025-12-02 09:07:31.587758171 +0000 UTC m=+0.091569702 container cleanup 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step3, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Dec 2 04:07:31 localhost systemd[1]: tripleo_collectd.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:07:31 localhost systemd[1]: libpod-conmon-2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.scope: Deactivated successfully. 
Dec 2 04:07:31 localhost podman[107635]: error opening file `/run/crun/2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c/status`: No such file or directory Dec 2 04:07:31 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.timer: Failed to open /run/systemd/transient/2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.timer: No such file or directory Dec 2 04:07:31 localhost systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Failed to open /run/systemd/transient/2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: No such file or directory Dec 2 04:07:31 localhost podman[107608]: 2025-12-02 09:07:31.708037569 +0000 UTC m=+0.083175225 container cleanup 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:07:31 localhost podman[107608]: collectd Dec 2 04:07:31 localhost systemd[1]: tripleo_collectd.service: Failed with result 'exit-code'. Dec 2 04:07:31 localhost systemd[1]: Stopped collectd container. 
Dec 2 04:07:31 localhost podman[107599]: 2025-12-02 09:07:31.676668389 +0000 UTC m=+0.102008910 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, vcs-type=git, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
container_name=metrics_qdr, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc.) Dec 2 04:07:31 localhost podman[107599]: 2025-12-02 09:07:31.915833195 +0000 UTC m=+0.341173686 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:07:31 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully. Dec 2 04:07:32 localhost python3.9[107738]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:07:32 localhost systemd[1]: var-lib-containers-storage-overlay-c13e199db7335dd51d53d563216fcc1a3ed75eba14190a583a84b8f73b6c9d42-merged.mount: Deactivated successfully. Dec 2 04:07:32 localhost systemd[1]: Reloading. Dec 2 04:07:32 localhost systemd-sysv-generator[107769]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:07:32 localhost systemd-rc-local-generator[107764]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:07:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:07:32 localhost systemd[1]: Stopping iscsid container... Dec 2 04:07:32 localhost systemd[1]: libpod-f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.scope: Deactivated successfully. Dec 2 04:07:32 localhost systemd[1]: libpod-f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.scope: Consumed 1.150s CPU time. Dec 2 04:07:32 localhost podman[107778]: 2025-12-02 09:07:32.959973819 +0000 UTC m=+0.096472073 container died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, container_name=iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 04:07:32 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.timer: Deactivated successfully. Dec 2 04:07:32 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b. 
Dec 2 04:07:32 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Failed to open /run/systemd/transient/f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: No such file or directory Dec 2 04:07:32 localhost systemd[1]: tmp-crun.fGrCVn.mount: Deactivated successfully. Dec 2 04:07:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b-userdata-shm.mount: Deactivated successfully. Dec 2 04:07:33 localhost podman[107778]: 2025-12-02 09:07:33.012532675 +0000 UTC m=+0.149030929 container cleanup f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step3, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-iscsid-container, distribution-scope=public, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z) Dec 2 04:07:33 localhost podman[107778]: iscsid Dec 2 04:07:33 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.timer: Failed to open /run/systemd/transient/f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.timer: No such file or directory Dec 2 04:07:33 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Failed to open /run/systemd/transient/f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: No such file or directory Dec 2 04:07:33 localhost podman[107790]: 2025-12-02 09:07:33.055748617 +0000 UTC m=+0.083545957 container cleanup f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, 
io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Dec 2 04:07:33 localhost systemd[1]: libpod-conmon-f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.scope: Deactivated successfully. Dec 2 04:07:33 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.timer: Failed to open /run/systemd/transient/f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.timer: No such file or directory Dec 2 04:07:33 localhost systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Failed to open /run/systemd/transient/f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: No such file or directory Dec 2 04:07:33 localhost podman[107806]: 2025-12-02 09:07:33.163876934 +0000 UTC m=+0.075637344 container cleanup f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public) Dec 2 04:07:33 localhost podman[107806]: iscsid Dec 2 04:07:33 localhost systemd[1]: tripleo_iscsid.service: Deactivated successfully. Dec 2 04:07:33 localhost systemd[1]: Stopped iscsid container. Dec 2 04:07:33 localhost systemd[1]: var-lib-containers-storage-overlay-63f5c4d65539870ee2bafb1f7e39854f191dd3f1ae459b319446f5932294db9e-merged.mount: Deactivated successfully. 
Dec 2 04:07:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33014 DF PROTO=TCP SPT=59520 DPT=9882 SEQ=1649973150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E7D410000000001030307) Dec 2 04:07:33 localhost python3.9[107911]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:07:33 localhost systemd[1]: Reloading. Dec 2 04:07:34 localhost systemd-rc-local-generator[107935]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:07:34 localhost systemd-sysv-generator[107941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:07:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:07:34 localhost systemd[1]: Stopping logrotate_crond container... Dec 2 04:07:34 localhost systemd[1]: libpod-7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.scope: Deactivated successfully. Dec 2 04:07:34 localhost systemd[1]: libpod-7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.scope: Consumed 1.116s CPU time. 
Dec 2 04:07:34 localhost podman[107951]: 2025-12-02 09:07:34.401092893 +0000 UTC m=+0.088745806 container died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible) Dec 2 04:07:34 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.timer: Deactivated successfully. Dec 2 04:07:34 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae. Dec 2 04:07:34 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Failed to open /run/systemd/transient/7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: No such file or directory Dec 2 04:07:34 localhost podman[107951]: 2025-12-02 09:07:34.460702446 +0000 UTC m=+0.148355359 container cleanup 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron) Dec 2 04:07:34 localhost podman[107951]: logrotate_crond Dec 2 04:07:34 localhost systemd[1]: var-lib-containers-storage-overlay-d4bf0a50fd432b1e17b5b60f382aa20fe21251bda35e0089667eec28efb9c70f-merged.mount: Deactivated successfully. 
Dec 2 04:07:34 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae-userdata-shm.mount: Deactivated successfully. Dec 2 04:07:34 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.timer: Failed to open /run/systemd/transient/7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.timer: No such file or directory Dec 2 04:07:34 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Failed to open /run/systemd/transient/7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: No such file or directory Dec 2 04:07:34 localhost podman[107964]: 2025-12-02 09:07:34.511801718 +0000 UTC m=+0.099717850 container cleanup 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:07:34 localhost systemd[1]: libpod-conmon-7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.scope: Deactivated successfully. 
Dec 2 04:07:34 localhost podman[107993]: error opening file `/run/crun/7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae/status`: No such file or directory Dec 2 04:07:34 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.timer: Failed to open /run/systemd/transient/7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.timer: No such file or directory Dec 2 04:07:34 localhost systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Failed to open /run/systemd/transient/7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: No such file or directory Dec 2 04:07:34 localhost podman[107981]: 2025-12-02 09:07:34.629284161 +0000 UTC m=+0.084874876 container cleanup 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, container_name=logrotate_crond, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-cron) Dec 2 04:07:34 localhost podman[107981]: logrotate_crond Dec 2 04:07:34 localhost systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully. Dec 2 04:07:34 localhost systemd[1]: Stopped logrotate_crond container. 
Dec 2 04:07:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33015 DF PROTO=TCP SPT=59520 DPT=9882 SEQ=1649973150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E81620000000001030307) Dec 2 04:07:35 localhost python3.9[108086]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:07:35 localhost systemd[1]: Reloading. Dec 2 04:07:35 localhost systemd-rc-local-generator[108113]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:07:35 localhost systemd-sysv-generator[108119]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:07:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:07:35 localhost systemd[1]: Stopping metrics_qdr container... Dec 2 04:07:35 localhost systemd[1]: tmp-crun.Plzjlz.mount: Deactivated successfully. Dec 2 04:07:35 localhost kernel: qdrouterd[54544]: segfault at 0 ip 00007f9dd87187cb sp 00007ffeb7d21e60 error 4 in libc.so.6[7f9dd86b5000+175000] Dec 2 04:07:35 localhost kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9 Dec 2 04:07:35 localhost systemd[1]: Created slice Slice /system/systemd-coredump. Dec 2 04:07:35 localhost systemd[1]: Started Process Core Dump (PID 108140/UID 0). 
Dec 2 04:07:36 localhost systemd-coredump[108141]: Resource limits disable core dumping for process 54544 (qdrouterd). Dec 2 04:07:36 localhost systemd-coredump[108141]: Process 54544 (qdrouterd) of user 42465 dumped core. Dec 2 04:07:36 localhost systemd[1]: libpod-67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.scope: Deactivated successfully. Dec 2 04:07:36 localhost systemd[1]: libpod-67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.scope: Consumed 29.209s CPU time. Dec 2 04:07:36 localhost systemd[1]: systemd-coredump@0-108140-0.service: Deactivated successfully. Dec 2 04:07:36 localhost podman[108127]: 2025-12-02 09:07:36.013406624 +0000 UTC m=+0.218432211 container died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, release=1761123044, build-date=2025-11-18T22:49:46Z, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Dec 2 04:07:36 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.timer: Deactivated successfully. Dec 2 04:07:36 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7. 
Dec 2 04:07:36 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Failed to open /run/systemd/transient/67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: No such file or directory Dec 2 04:07:36 localhost podman[108127]: 2025-12-02 09:07:36.076941077 +0000 UTC m=+0.281966584 container cleanup 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 
'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.buildah.version=1.41.4, version=17.1.12) Dec 2 04:07:36 localhost podman[108127]: metrics_qdr Dec 2 04:07:36 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.timer: Failed to open /run/systemd/transient/67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.timer: No such file or directory Dec 2 04:07:36 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Failed to open /run/systemd/transient/67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: No such file or directory Dec 2 04:07:36 localhost podman[108145]: 2025-12-02 09:07:36.111998459 +0000 UTC m=+0.086130215 container cleanup 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, 
url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:07:36 localhost systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a Dec 2 04:07:36 localhost systemd[1]: libpod-conmon-67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.scope: Deactivated successfully. Dec 2 04:07:36 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.timer: Failed to open /run/systemd/transient/67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.timer: No such file or directory Dec 2 04:07:36 localhost systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Failed to open /run/systemd/transient/67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: No such file or directory Dec 2 04:07:36 localhost podman[108162]: 2025-12-02 09:07:36.228206263 +0000 UTC m=+0.077643796 container cleanup 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.component=openstack-qdrouterd-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 04:07:36 localhost podman[108162]: metrics_qdr Dec 2 04:07:36 localhost systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'. Dec 2 04:07:36 localhost systemd[1]: Stopped metrics_qdr container. 
Dec 2 04:07:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33016 DF PROTO=TCP SPT=59520 DPT=9882 SEQ=1649973150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E89620000000001030307)
Dec 2 04:07:36 localhost systemd[1]: tmp-crun.fJeRuO.mount: Deactivated successfully.
Dec 2 04:07:36 localhost systemd[1]: var-lib-containers-storage-overlay-46d22fb86a8cbaa2935fad3e910e4610328c0a9c2837bb75cb2a0cd28ff52849-merged.mount: Deactivated successfully.
Dec 2 04:07:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7-userdata-shm.mount: Deactivated successfully.
Dec 2 04:07:36 localhost python3.9[108267]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:07:37 localhost python3.9[108360]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:07:38 localhost python3.9[108453]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:07:39 localhost python3.9[108546]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:07:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 2 04:07:39 localhost systemd[1]: Reloading.
Dec 2 04:07:39 localhost podman[108548]: 2025-12-02 09:07:39.261705959 +0000 UTC m=+0.108956473 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, release=1761123044, maintainer=OpenStack TripleO Team, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com) Dec 2 04:07:39 localhost podman[108548]: 2025-12-02 09:07:39.29769789 +0000 UTC m=+0.144948374 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, vendor=Red Hat, Inc.) Dec 2 04:07:39 localhost systemd-sysv-generator[108594]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:07:39 localhost systemd-rc-local-generator[108589]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:07:39 localhost podman[108548]: unhealthy Dec 2 04:07:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:07:39 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:07:39 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:07:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:07:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:07:39 localhost systemd[1]: Stopping nova_compute container... 
Dec 2 04:07:39 localhost podman[108602]: Error: container 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e is not running Dec 2 04:07:39 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=125/n/a Dec 2 04:07:39 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'. Dec 2 04:07:39 localhost podman[108603]: 2025-12-02 09:07:39.73680822 +0000 UTC m=+0.191871690 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1) Dec 2 04:07:39 localhost podman[108603]: 2025-12-02 09:07:39.758964188 +0000 UTC m=+0.214027658 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, url=https://www.redhat.com, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:07:39 localhost podman[108603]: unhealthy Dec 2 04:07:39 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:07:39 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 04:07:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4004 DF PROTO=TCP SPT=34714 DPT=9100 SEQ=1261654211 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E97220000000001030307) Dec 2 04:07:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 04:07:42 localhost systemd[1]: tmp-crun.Cs0bp8.mount: Deactivated successfully. 
Dec 2 04:07:42 localhost podman[108647]: 2025-12-02 09:07:42.845300538 +0000 UTC m=+0.097610746 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public) Dec 2 04:07:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18763 DF PROTO=TCP SPT=54336 DPT=9105 SEQ=2188115077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EA2620000000001030307) Dec 2 04:07:43 localhost podman[108647]: 2025-12-02 09:07:43.212444327 +0000 UTC m=+0.464754575 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target) Dec 2 04:07:43 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. 
Dec 2 04:07:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41527 DF PROTO=TCP SPT=35936 DPT=9105 SEQ=2769818466 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EAD220000000001030307)
Dec 2 04:07:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33018 DF PROTO=TCP SPT=59520 DPT=9882 SEQ=1649973150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EB9220000000001030307)
Dec 2 04:07:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17824 DF PROTO=TCP SPT=35444 DPT=9101 SEQ=1492685268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EC5220000000001030307)
Dec 2 04:07:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42473 DF PROTO=TCP SPT=37852 DPT=9101 SEQ=1754784311 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52ED9220000000001030307)
Dec 2 04:07:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18766 DF PROTO=TCP SPT=54336 DPT=9105 SEQ=2188115077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EDB220000000001030307)
Dec 2 04:08:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43905 DF PROTO=TCP SPT=51142 DPT=9102 SEQ=890444739 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EE9220000000001030307)
Dec 2 04:08:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52849 DF PROTO=TCP SPT=38178 DPT=9882 SEQ=556695907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EF6620000000001030307)
Dec 2 04:08:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52850 DF PROTO=TCP SPT=38178 DPT=9882 SEQ=556695907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EFE620000000001030307)
Dec 2 04:08:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 2 04:08:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 2 04:08:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 2 04:08:10 localhost podman[108672]: 2025-12-02 09:08:10.085585011 +0000 UTC m=+0.079336097 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, release=1761123044) Dec 2 04:08:10 localhost podman[108672]: 2025-12-02 09:08:10.099239219 +0000 UTC m=+0.092990315 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, vcs-type=git, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container) Dec 2 04:08:10 localhost podman[108672]: unhealthy Dec 2 04:08:10 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:08:10 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:08:10 localhost podman[108671]: 2025-12-02 09:08:10.141738488 +0000 UTC m=+0.138945130 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, config_id=tripleo_step4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 04:08:10 localhost podman[108671]: 2025-12-02 09:08:10.155497349 +0000 UTC m=+0.152703931 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=ovn_metadata_agent, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044) Dec 2 04:08:10 localhost podman[108671]: unhealthy Dec 2 04:08:10 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:08:10 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. Dec 2 04:08:10 localhost podman[108670]: Error: container 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e is not running Dec 2 04:08:10 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=125/n/a Dec 2 04:08:10 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'. 
Dec 2 04:08:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17332 DF PROTO=TCP SPT=39816 DPT=9100 SEQ=817660967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F0D230000000001030307) Dec 2 04:08:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52558 DF PROTO=TCP SPT=36662 DPT=9105 SEQ=3234899454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F17A30000000001030307) Dec 2 04:08:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 04:08:13 localhost podman[108721]: 2025-12-02 09:08:13.822669447 +0000 UTC m=+0.079369028 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=nova_migration_target, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 04:08:14 localhost podman[108721]: 2025-12-02 09:08:14.208098246 +0000 UTC m=+0.464797767 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1) Dec 2 04:08:14 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully. Dec 2 04:08:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20172 DF PROTO=TCP SPT=41378 DPT=9102 SEQ=2341515912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F22CE0000000001030307) Dec 2 04:08:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20174 DF PROTO=TCP SPT=41378 DPT=9102 SEQ=2341515912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F2EE20000000001030307) Dec 2 04:08:21 localhost podman[108615]: time="2025-12-02T09:08:21Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL" Dec 2 04:08:21 localhost systemd[1]: libpod-6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.scope: Deactivated successfully. Dec 2 04:08:21 localhost systemd[1]: libpod-6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.scope: Consumed 28.668s CPU time. 
Dec 2 04:08:21 localhost podman[108615]: 2025-12-02 09:08:21.641166055 +0000 UTC m=+42.086446735 container died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1761123044, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Dec 2 04:08:21 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.timer: Deactivated successfully. Dec 2 04:08:21 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e. Dec 2 04:08:21 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed to open /run/systemd/transient/6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: No such file or directory Dec 2 04:08:21 localhost systemd[1]: var-lib-containers-storage-overlay-1e1d8b5716686b6ea155be98d0f313571788c49d87ac4366e7f84d4f947d1b6e-merged.mount: Deactivated successfully. 
Dec 2 04:08:21 localhost podman[108615]: 2025-12-02 09:08:21.703049087 +0000 UTC m=+42.148329777 container cleanup 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64) Dec 2 04:08:21 localhost podman[108615]: nova_compute Dec 2 04:08:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42475 DF PROTO=TCP SPT=37852 DPT=9101 SEQ=1754784311 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F39220000000001030307) Dec 2 04:08:21 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.timer: Failed to open /run/systemd/transient/6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.timer: No such file or directory Dec 2 04:08:21 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: 
Failed to open /run/systemd/transient/6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: No such file or directory Dec 2 04:08:21 localhost podman[108822]: 2025-12-02 09:08:21.762794944 +0000 UTC m=+0.114894005 container cleanup 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 04:08:21 localhost systemd[1]: libpod-conmon-6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.scope: Deactivated successfully. 
Dec 2 04:08:21 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.timer: Failed to open /run/systemd/transient/6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.timer: No such file or directory Dec 2 04:08:21 localhost systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed to open /run/systemd/transient/6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: No such file or directory Dec 2 04:08:21 localhost podman[108838]: 2025-12-02 09:08:21.856626234 +0000 UTC m=+0.057754767 container cleanup 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 
'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, container_name=nova_compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute) Dec 2 04:08:21 localhost podman[108838]: nova_compute Dec 2 04:08:21 localhost systemd[1]: tripleo_nova_compute.service: Deactivated successfully. Dec 2 04:08:21 localhost systemd[1]: Stopped nova_compute container. 
Dec 2 04:08:21 localhost systemd[1]: tripleo_nova_compute.service: Consumed 1.098s CPU time, no IO. Dec 2 04:08:22 localhost python3.9[108940]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:08:22 localhost systemd[1]: Reloading. Dec 2 04:08:22 localhost systemd-rc-local-generator[108963]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:08:22 localhost systemd-sysv-generator[108969]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:08:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:08:22 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 04:08:22 localhost systemd[1]: Stopping nova_migration_target container... Dec 2 04:08:22 localhost recover_tripleo_nova_virtqemud[108981]: 61907 Dec 2 04:08:22 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 04:08:22 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 04:08:23 localhost systemd[1]: libpod-f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.scope: Deactivated successfully. Dec 2 04:08:23 localhost systemd[1]: libpod-f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.scope: Consumed 34.846s CPU time. 
Dec 2 04:08:23 localhost podman[108983]: 2025-12-02 09:08:23.032561139 +0000 UTC m=+0.072212070 container died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.expose-services=, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4) Dec 2 04:08:23 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.timer: Deactivated successfully. Dec 2 04:08:23 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc. Dec 2 04:08:23 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Failed to open /run/systemd/transient/f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: No such file or directory Dec 2 04:08:23 localhost systemd[1]: tmp-crun.B9GroG.mount: Deactivated successfully. Dec 2 04:08:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc-userdata-shm.mount: Deactivated successfully. 
Dec 2 04:08:23 localhost podman[108983]: 2025-12-02 09:08:23.082417214 +0000 UTC m=+0.122068095 container cleanup f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 
nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-19T00:36:58Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=) Dec 2 04:08:23 localhost podman[108983]: nova_migration_target Dec 2 04:08:23 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.timer: Failed to open /run/systemd/transient/f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.timer: No such file or directory Dec 2 04:08:23 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Failed to open /run/systemd/transient/f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: No such file or directory Dec 2 04:08:23 localhost podman[108995]: 2025-12-02 09:08:23.145872244 +0000 UTC m=+0.106546729 container cleanup f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, 
io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com) Dec 2 04:08:23 localhost systemd[1]: libpod-conmon-f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.scope: Deactivated successfully. 
Dec 2 04:08:23 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.timer: Failed to open /run/systemd/transient/f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.timer: No such file or directory Dec 2 04:08:23 localhost systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Failed to open /run/systemd/transient/f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: No such file or directory Dec 2 04:08:23 localhost podman[109011]: 2025-12-02 09:08:23.249227085 +0000 UTC m=+0.066437923 container cleanup f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, config_id=tripleo_step4, container_name=nova_migration_target, io.buildah.version=1.41.4) Dec 2 04:08:23 localhost podman[109011]: nova_migration_target Dec 2 04:08:23 localhost systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully. Dec 2 04:08:23 localhost systemd[1]: Stopped nova_migration_target container. Dec 2 04:08:24 localhost systemd[1]: var-lib-containers-storage-overlay-becbc927e1a2defd8b98f9313e9ae54e436a645a48c9af865764923e7f3644aa-merged.mount: Deactivated successfully. Dec 2 04:08:24 localhost python3.9[109115]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:08:24 localhost systemd[1]: Reloading. Dec 2 04:08:24 localhost systemd-rc-local-generator[109138]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 04:08:24 localhost systemd-sysv-generator[109143]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:08:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:08:24 localhost systemd[1]: Stopping nova_virtlogd_wrapper container... Dec 2 04:08:24 localhost systemd[1]: libpod-fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b.scope: Deactivated successfully. Dec 2 04:08:24 localhost podman[109156]: 2025-12-02 09:08:24.855099929 +0000 UTC m=+0.077789989 container died fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, tcib_managed=true, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtlogd_wrapper, vcs-type=git, 
build-date=2025-11-19T00:35:22Z, managed_by=tripleo_ansible) Dec 2 04:08:24 localhost podman[109156]: 2025-12-02 09:08:24.902233801 +0000 UTC m=+0.124923811 container cleanup fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, version=17.1.12, build-date=2025-11-19T00:35:22Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, container_name=nova_virtlogd_wrapper, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt) Dec 2 04:08:24 localhost podman[109156]: nova_virtlogd_wrapper Dec 2 04:08:24 localhost podman[109170]: 2025-12-02 09:08:24.935125197 +0000 UTC m=+0.075501740 container cleanup fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, name=rhosp17/openstack-nova-libvirt, release=1761123044, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, 
architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, container_name=nova_virtlogd_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:08:25 localhost systemd[1]: var-lib-containers-storage-overlay-3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98-merged.mount: Deactivated successfully. Dec 2 04:08:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b-userdata-shm.mount: Deactivated successfully. 
Dec 2 04:08:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42558 DF PROTO=TCP SPT=55940 DPT=9101 SEQ=2379699008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F4E620000000001030307) Dec 2 04:08:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20176 DF PROTO=TCP SPT=41378 DPT=9102 SEQ=2341515912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F5F220000000001030307) Dec 2 04:08:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22713 DF PROTO=TCP SPT=43664 DPT=9882 SEQ=3084388863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F67A10000000001030307) Dec 2 04:08:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22714 DF PROTO=TCP SPT=43664 DPT=9882 SEQ=3084388863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F6BA20000000001030307) Dec 2 04:08:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22715 DF PROTO=TCP SPT=43664 DPT=9882 SEQ=3084388863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F73A30000000001030307) Dec 2 04:08:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47140 DF PROTO=TCP SPT=56506 DPT=9100 SEQ=154281784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AD52F81220000000001030307) Dec 2 04:08:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:08:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 04:08:40 localhost podman[109186]: 2025-12-02 09:08:40.590494749 +0000 UTC m=+0.083784403 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 04:08:40 localhost podman[109185]: 2025-12-02 09:08:40.638031943 +0000 UTC m=+0.134205946 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public) Dec 2 04:08:40 localhost podman[109186]: 2025-12-02 09:08:40.660445638 +0000 UTC m=+0.153735272 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, com.redhat.component=openstack-ovn-controller-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, architecture=x86_64) Dec 2 04:08:40 localhost podman[109186]: unhealthy Dec 2 04:08:40 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:08:40 
localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. Dec 2 04:08:40 localhost podman[109185]: 2025-12-02 09:08:40.677670535 +0000 UTC m=+0.173844498 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 04:08:40 localhost podman[109185]: unhealthy Dec 2 04:08:40 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:08:40 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 04:08:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17333 DF PROTO=TCP SPT=39816 DPT=9100 SEQ=817660967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F8B220000000001030307) Dec 2 04:08:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52019 DF PROTO=TCP SPT=47254 DPT=9102 SEQ=2912015696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F97FE0000000001030307) Dec 2 04:08:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22717 DF PROTO=TCP SPT=43664 DPT=9882 SEQ=3084388863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FA3220000000001030307) Dec 2 04:08:50 localhost sshd[109228]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:08:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42560 DF PROTO=TCP SPT=55940 DPT=9101 SEQ=2379699008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FAF220000000001030307) Dec 2 04:08:56 localhost sshd[109230]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:08:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4811 DF PROTO=TCP SPT=47240 DPT=9101 SEQ=2026001691 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FC3A30000000001030307) Dec 2 04:09:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52023 DF PROTO=TCP SPT=47254 DPT=9102 SEQ=2912015696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FD5230000000001030307) Dec 2 04:09:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8208 DF PROTO=TCP SPT=40790 DPT=9882 SEQ=684199273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FDCD10000000001030307) Dec 2 04:09:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8209 DF PROTO=TCP SPT=40790 DPT=9882 SEQ=684199273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FE0E30000000001030307) Dec 2 04:09:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8210 DF PROTO=TCP SPT=40790 DPT=9882 SEQ=684199273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FE8E20000000001030307) Dec 2 04:09:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59327 DF PROTO=TCP SPT=55048 DPT=9100 SEQ=2405397969 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FF7230000000001030307) Dec 2 04:09:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:09:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 04:09:10 localhost systemd[1]: tmp-crun.aOAflb.mount: Deactivated successfully. 
Dec 2 04:09:10 localhost podman[109233]: 2025-12-02 09:09:10.875208421 +0000 UTC m=+0.125948053 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public) Dec 2 04:09:10 localhost podman[109232]: 2025-12-02 09:09:10.840738447 +0000 UTC m=+0.094059308 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, container_name=ovn_metadata_agent, vcs-type=git, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 04:09:10 localhost podman[109233]: 2025-12-02 09:09:10.915598966 +0000 UTC m=+0.166338548 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Dec 2 04:09:10 localhost podman[109233]: unhealthy Dec 2 04:09:10 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:09:10 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. 
Dec 2 04:09:10 localhost podman[109232]: 2025-12-02 09:09:10.972523897 +0000 UTC m=+0.225844828 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64) Dec 2 04:09:10 localhost podman[109232]: unhealthy Dec 2 04:09:10 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:09:10 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 04:09:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14592 DF PROTO=TCP SPT=46348 DPT=9105 SEQ=3247990143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53001E20000000001030307) Dec 2 04:09:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52563 DF PROTO=TCP SPT=36662 DPT=9105 SEQ=3234899454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5300D220000000001030307) Dec 2 04:09:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56448 DF PROTO=TCP SPT=43798 DPT=9102 SEQ=3098469964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53019230000000001030307) Dec 2 04:09:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4813 DF PROTO=TCP SPT=47240 DPT=9101 SEQ=2026001691 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53023230000000001030307) Dec 2 04:09:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15816 DF PROTO=TCP SPT=59078 DPT=9101 SEQ=200685646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53038A30000000001030307) Dec 2 04:09:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56450 DF PROTO=TCP SPT=43798 DPT=9102 SEQ=3098469964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AD53049220000000001030307) Dec 2 04:09:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28179 DF PROTO=TCP SPT=39914 DPT=9882 SEQ=726185489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53052010000000001030307) Dec 2 04:09:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28180 DF PROTO=TCP SPT=39914 DPT=9882 SEQ=726185489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53056220000000001030307) Dec 2 04:09:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28181 DF PROTO=TCP SPT=39914 DPT=9882 SEQ=726185489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5305E220000000001030307) Dec 2 04:09:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19784 DF PROTO=TCP SPT=55354 DPT=9100 SEQ=1019660958 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5306B220000000001030307) Dec 2 04:09:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 04:09:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:09:41 localhost systemd[1]: tmp-crun.yAJmhh.mount: Deactivated successfully. 
Dec 2 04:09:41 localhost podman[109399]: 2025-12-02 09:09:41.073923271 +0000 UTC m=+0.073548191 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044) Dec 2 04:09:41 localhost podman[109400]: 2025-12-02 09:09:41.132945226 +0000 UTC m=+0.126851230 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, release=1761123044, distribution-scope=public, version=17.1.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 04:09:41 localhost podman[109399]: 2025-12-02 09:09:41.168758292 +0000 UTC m=+0.168383212 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64) Dec 2 04:09:41 localhost podman[109399]: unhealthy Dec 2 04:09:41 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:09:41 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'. 
Dec 2 04:09:41 localhost podman[109400]: 2025-12-02 09:09:41.224150145 +0000 UTC m=+0.218056219 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4) Dec 2 04:09:41 localhost podman[109400]: unhealthy Dec 2 04:09:41 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:09:41 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'. 
Dec 2 04:09:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56247 DF PROTO=TCP SPT=54086 DPT=9105 SEQ=4183364387 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53077220000000001030307) Dec 2 04:09:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56985 DF PROTO=TCP SPT=34814 DPT=9102 SEQ=1041132309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530825E0000000001030307) Dec 2 04:09:48 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 04:09:48 localhost recover_tripleo_nova_virtqemud[109440]: 61907 Dec 2 04:09:48 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 04:09:48 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 04:09:48 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing. Dec 2 04:09:48 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 61145 (conmon) with signal SIGKILL. Dec 2 04:09:48 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL Dec 2 04:09:48 localhost systemd[1]: libpod-conmon-fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b.scope: Deactivated successfully. 
Dec 2 04:09:49 localhost podman[109452]: error opening file `/run/crun/fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b/status`: No such file or directory Dec 2 04:09:49 localhost podman[109441]: 2025-12-02 09:09:49.065936548 +0000 UTC m=+0.069901879 container cleanup fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, config_id=tripleo_step3, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=) Dec 2 04:09:49 localhost podman[109441]: nova_virtlogd_wrapper Dec 2 04:09:49 localhost systemd[1]: tmp-crun.c1fb1P.mount: Deactivated successfully. Dec 2 04:09:49 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'. Dec 2 04:09:49 localhost systemd[1]: Stopped nova_virtlogd_wrapper container. 
Dec 2 04:09:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56987 DF PROTO=TCP SPT=34814 DPT=9102 SEQ=1041132309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5308E630000000001030307) Dec 2 04:09:49 localhost python3.9[109545]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:09:49 localhost systemd[1]: Reloading. Dec 2 04:09:49 localhost systemd-sysv-generator[109575]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:09:49 localhost systemd-rc-local-generator[109571]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:09:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:09:50 localhost systemd[1]: Stopping nova_virtnodedevd container... Dec 2 04:09:50 localhost systemd[1]: libpod-380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3.scope: Deactivated successfully. Dec 2 04:09:50 localhost systemd[1]: libpod-380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3.scope: Consumed 1.459s CPU time. 
Dec 2 04:09:50 localhost podman[109586]: 2025-12-02 09:09:50.241867821 +0000 UTC m=+0.076352555 container died 380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, version=17.1.12, container_name=nova_virtnodedevd, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=) Dec 2 04:09:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3-userdata-shm.mount: Deactivated successfully. 
Dec 2 04:09:50 localhost podman[109586]: 2025-12-02 09:09:50.279246275 +0000 UTC m=+0.113730959 container cleanup 380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.12, container_name=nova_virtnodedevd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Dec 2 04:09:50 localhost podman[109586]: nova_virtnodedevd Dec 2 04:09:50 localhost podman[109601]: 2025-12-02 09:09:50.345361847 +0000 UTC m=+0.093291384 container cleanup 380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_virtnodedevd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, release=1761123044, config_id=tripleo_step3, architecture=x86_64, managed_by=tripleo_ansible) Dec 2 04:09:50 localhost systemd[1]: libpod-conmon-380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3.scope: Deactivated successfully. Dec 2 04:09:50 localhost podman[109628]: error opening file `/run/crun/380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3/status`: No such file or directory Dec 2 04:09:50 localhost podman[109617]: 2025-12-02 09:09:50.433348618 +0000 UTC m=+0.056870791 container cleanup 380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtnodedevd, version=17.1.12, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, config_data={'cgroupns': 
'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true) Dec 2 04:09:50 localhost podman[109617]: nova_virtnodedevd Dec 2 04:09:50 localhost systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully. Dec 2 04:09:50 localhost systemd[1]: Stopped nova_virtnodedevd container. Dec 2 04:09:51 localhost python3.9[109721]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:09:51 localhost systemd[1]: var-lib-containers-storage-overlay-d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a-merged.mount: Deactivated successfully. Dec 2 04:09:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15818 DF PROTO=TCP SPT=59078 DPT=9101 SEQ=200685646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53099230000000001030307) Dec 2 04:09:52 localhost sshd[109723]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:09:52 localhost systemd[1]: Reloading. Dec 2 04:09:52 localhost systemd-sysv-generator[109751]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:09:52 localhost systemd-rc-local-generator[109746]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 04:09:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:09:52 localhost systemd[1]: Stopping nova_virtproxyd container... Dec 2 04:09:52 localhost systemd[1]: libpod-f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358.scope: Deactivated successfully. Dec 2 04:09:52 localhost podman[109764]: 2025-12-02 09:09:52.630751712 +0000 UTC m=+0.061968806 container died f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtproxyd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step3, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 04:09:52 localhost systemd[1]: tmp-crun.r9vTAM.mount: Deactivated successfully. 
Dec 2 04:09:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358-userdata-shm.mount: Deactivated successfully. Dec 2 04:09:52 localhost podman[109764]: 2025-12-02 09:09:52.670498408 +0000 UTC m=+0.101715462 container cleanup f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, url=https://www.redhat.com, container_name=nova_virtproxyd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc.) 
Dec 2 04:09:52 localhost podman[109764]: nova_virtproxyd Dec 2 04:09:52 localhost podman[109779]: 2025-12-02 09:09:52.714435072 +0000 UTC m=+0.071376504 container cleanup f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtproxyd, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-nova-libvirt) Dec 2 04:09:52 localhost systemd[1]: libpod-conmon-f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358.scope: Deactivated successfully. 
Dec 2 04:09:52 localhost podman[109806]: error opening file `/run/crun/f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358/status`: No such file or directory Dec 2 04:09:52 localhost podman[109794]: 2025-12-02 09:09:52.82393084 +0000 UTC m=+0.071503217 container cleanup f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=nova_virtproxyd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:09:52 localhost podman[109794]: nova_virtproxyd Dec 2 04:09:52 localhost systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully. Dec 2 04:09:52 localhost systemd[1]: Stopped nova_virtproxyd container. 
Dec 2 04:09:53 localhost python3.9[109899]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:09:53 localhost systemd[1]: var-lib-containers-storage-overlay-e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084-merged.mount: Deactivated successfully. Dec 2 04:09:53 localhost systemd[1]: Reloading. Dec 2 04:09:53 localhost systemd-rc-local-generator[109923]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:09:53 localhost systemd-sysv-generator[109929]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:09:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:09:53 localhost systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully. Dec 2 04:09:53 localhost systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m. Dec 2 04:09:54 localhost systemd[1]: Stopping nova_virtqemud container... Dec 2 04:09:54 localhost systemd[1]: libpod-cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7.scope: Deactivated successfully. Dec 2 04:09:54 localhost systemd[1]: libpod-cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7.scope: Consumed 2.120s CPU time. 
Dec 2 04:09:54 localhost podman[109940]: 2025-12-02 09:09:54.105430394 +0000 UTC m=+0.083655410 container stop cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, container_name=nova_virtqemud, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 04:09:54 localhost podman[109940]: 2025-12-02 09:09:54.140720813 +0000 UTC m=+0.118945839 container died cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, container_name=nova_virtqemud, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com) Dec 2 04:09:54 localhost podman[109940]: 2025-12-02 09:09:54.181959774 +0000 UTC m=+0.160184770 container cleanup cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_virtqemud, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, 
version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:09:54 localhost podman[109940]: nova_virtqemud Dec 2 04:09:54 localhost podman[109953]: 2025-12-02 09:09:54.20471399 +0000 UTC m=+0.081593146 container cleanup cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:09:54 localhost systemd[1]: libpod-conmon-cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7.scope: Deactivated successfully. 
Dec 2 04:09:54 localhost podman[109983]: error opening file `/run/crun/cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7/status`: No such file or directory Dec 2 04:09:54 localhost podman[109972]: 2025-12-02 09:09:54.317974034 +0000 UTC m=+0.076186611 container cleanup cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtqemud, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Dec 2 04:09:54 localhost podman[109972]: nova_virtqemud Dec 2 04:09:54 localhost systemd[1]: tripleo_nova_virtqemud.service: Deactivated successfully. Dec 2 04:09:54 localhost systemd[1]: Stopped nova_virtqemud container. Dec 2 04:09:54 localhost systemd[1]: var-lib-containers-storage-overlay-307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6-merged.mount: Deactivated successfully. 
Dec 2 04:09:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7-userdata-shm.mount: Deactivated successfully. Dec 2 04:09:55 localhost python3.9[110076]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:09:55 localhost systemd[1]: Reloading. Dec 2 04:09:55 localhost systemd-rc-local-generator[110101]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:09:55 localhost systemd-sysv-generator[110106]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:09:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:09:55 localhost systemd[1]: Starting dnf makecache... Dec 2 04:09:55 localhost dnf[110114]: Updating Subscription Management repositories. Dec 2 04:09:56 localhost python3.9[110206]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:09:56 localhost systemd[1]: Reloading. Dec 2 04:09:56 localhost systemd-rc-local-generator[110231]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:09:56 localhost systemd-sysv-generator[110235]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:09:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:09:56 localhost systemd[1]: Stopping nova_virtsecretd container... Dec 2 04:09:56 localhost systemd[1]: libpod-c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10.scope: Deactivated successfully. Dec 2 04:09:56 localhost podman[110246]: 2025-12-02 09:09:56.572010941 +0000 UTC m=+0.060868643 container died c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtsecretd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 04:09:56 localhost podman[110246]: 2025-12-02 09:09:56.618042148 +0000 UTC m=+0.106899860 container cleanup c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, 
name=nova_virtsecretd, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtsecretd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, batch=17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public) Dec 2 04:09:56 localhost podman[110246]: nova_virtsecretd Dec 2 04:09:56 localhost podman[110259]: 2025-12-02 09:09:56.650420519 +0000 UTC m=+0.069352482 container cleanup c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, build-date=2025-11-19T00:35:22Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtsecretd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044) Dec 2 04:09:56 localhost systemd[1]: libpod-conmon-c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10.scope: Deactivated successfully. Dec 2 04:09:56 localhost podman[110285]: error opening file `/run/crun/c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10/status`: No such file or directory Dec 2 04:09:56 localhost podman[110276]: 2025-12-02 09:09:56.770079759 +0000 UTC m=+0.089205669 container cleanup c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.buildah.version=1.41.4, container_name=nova_virtsecretd, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64) Dec 2 04:09:56 localhost podman[110276]: nova_virtsecretd Dec 2 04:09:56 localhost systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully. Dec 2 04:09:56 localhost systemd[1]: Stopped nova_virtsecretd container. Dec 2 04:09:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30926 DF PROTO=TCP SPT=44932 DPT=9101 SEQ=154037446 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530ADE20000000001030307) Dec 2 04:09:57 localhost dnf[110114]: Metadata cache refreshed recently. Dec 2 04:09:57 localhost python3.9[110379]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:09:57 localhost systemd[1]: Reloading. Dec 2 04:09:57 localhost systemd-rc-local-generator[110408]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:09:57 localhost systemd-sysv-generator[110411]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:09:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:09:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10-userdata-shm.mount: Deactivated successfully. 
Dec 2 04:09:57 localhost systemd[1]: var-lib-containers-storage-overlay-e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a-merged.mount: Deactivated successfully. Dec 2 04:09:57 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Dec 2 04:09:57 localhost systemd[1]: Finished dnf makecache. Dec 2 04:09:57 localhost systemd[1]: dnf-makecache.service: Consumed 2.040s CPU time. Dec 2 04:09:57 localhost systemd[1]: Stopping nova_virtstoraged container... Dec 2 04:09:57 localhost systemd[1]: libpod-f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379.scope: Deactivated successfully. Dec 2 04:09:57 localhost podman[110419]: 2025-12-02 09:09:57.90697398 +0000 UTC m=+0.081427893 container died f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 
65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_virtstoraged, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible) Dec 2 04:09:57 localhost podman[110419]: 2025-12-02 
09:09:57.942949059 +0000 UTC m=+0.117402992 container cleanup f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, tcib_managed=true, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtstoraged, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.expose-services=) Dec 2 04:09:57 localhost podman[110419]: nova_virtstoraged Dec 2 04:09:57 localhost podman[110433]: 2025-12-02 09:09:57.983753847 +0000 UTC m=+0.065333108 container cleanup f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat 
OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, config_id=tripleo_step3, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtstoraged, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:09:58 localhost systemd[1]: libpod-conmon-f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379.scope: Deactivated successfully. Dec 2 04:09:58 localhost podman[110459]: error opening file `/run/crun/f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379/status`: No such file or directory Dec 2 04:09:58 localhost podman[110448]: 2025-12-02 09:09:58.079527216 +0000 UTC m=+0.064544895 container cleanup f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, managed_by=tripleo_ansible, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 
65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtstoraged, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64) Dec 2 04:09:58 localhost podman[110448]: nova_virtstoraged Dec 2 04:09:58 localhost systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully. Dec 2 04:09:58 localhost systemd[1]: Stopped nova_virtstoraged container. Dec 2 04:09:58 localhost python3.9[110554]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:09:58 localhost systemd[1]: var-lib-containers-storage-overlay-14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca-merged.mount: Deactivated successfully. Dec 2 04:09:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379-userdata-shm.mount: Deactivated successfully. Dec 2 04:09:58 localhost systemd[1]: Reloading. Dec 2 04:09:58 localhost systemd-rc-local-generator[110581]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:09:58 localhost systemd-sysv-generator[110587]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:09:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:09:59 localhost systemd[1]: Stopping ovn_controller container... 
Dec 2 04:09:59 localhost systemd[1]: tmp-crun.DOW8Ub.mount: Deactivated successfully. Dec 2 04:09:59 localhost systemd[1]: libpod-b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.scope: Deactivated successfully. Dec 2 04:09:59 localhost systemd[1]: libpod-b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.scope: Consumed 2.586s CPU time. Dec 2 04:09:59 localhost podman[110595]: 2025-12-02 09:09:59.21879933 +0000 UTC m=+0.081069911 container died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, tcib_managed=true, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, 
io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:09:59 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.timer: Deactivated successfully. Dec 2 04:09:59 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d. Dec 2 04:09:59 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed to open /run/systemd/transient/b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: No such file or directory Dec 2 04:09:59 localhost podman[110595]: 2025-12-02 09:09:59.259420192 +0000 UTC m=+0.121690773 container cleanup b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 04:09:59 localhost podman[110595]: ovn_controller Dec 2 04:09:59 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.timer: Failed to open /run/systemd/transient/b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.timer: No such file or directory Dec 2 04:09:59 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed to open /run/systemd/transient/b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: No such file or directory Dec 2 04:09:59 localhost podman[110608]: 2025-12-02 09:09:59.312816625 +0000 UTC m=+0.083020560 container cleanup 
b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, container_name=ovn_controller, vcs-type=git, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, version=17.1.12) Dec 2 04:09:59 localhost systemd[1]: libpod-conmon-b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.scope: Deactivated successfully. Dec 2 04:09:59 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.timer: Failed to open /run/systemd/transient/b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.timer: No such file or directory Dec 2 04:09:59 localhost systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed to open /run/systemd/transient/b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: No such file or directory Dec 2 04:09:59 localhost podman[110624]: 2025-12-02 09:09:59.388056966 +0000 UTC m=+0.050049032 container cleanup b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., container_name=ovn_controller, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 04:09:59 localhost podman[110624]: ovn_controller Dec 2 04:09:59 localhost systemd[1]: tripleo_ovn_controller.service: Deactivated successfully. Dec 2 04:09:59 localhost systemd[1]: Stopped ovn_controller container. Dec 2 04:09:59 localhost systemd[1]: var-lib-containers-storage-overlay-8d25cd45e405537f342915e53026fb2ea6ae337ec52f5b72439f9a37d98e6337-merged.mount: Deactivated successfully. Dec 2 04:09:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d-userdata-shm.mount: Deactivated successfully. Dec 2 04:10:00 localhost python3.9[110727]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:10:00 localhost systemd[1]: Reloading. Dec 2 04:10:00 localhost systemd-rc-local-generator[110751]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 04:10:00 localhost systemd-sysv-generator[110757]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:10:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:10:00 localhost systemd[1]: Stopping ovn_metadata_agent container... Dec 2 04:10:01 localhost systemd[1]: libpod-6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.scope: Deactivated successfully. Dec 2 04:10:01 localhost systemd[1]: libpod-6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.scope: Consumed 9.520s CPU time. Dec 2 04:10:01 localhost podman[110769]: 2025-12-02 09:10:01.313161603 +0000 UTC m=+0.814524272 container died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:10:01 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.timer: Deactivated successfully. 
Dec 2 04:10:01 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b. Dec 2 04:10:01 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed to open /run/systemd/transient/6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: No such file or directory Dec 2 04:10:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b-userdata-shm.mount: Deactivated successfully. Dec 2 04:10:01 localhost systemd[1]: var-lib-containers-storage-overlay-a895fb8ef70030e2b27c789af81d44f745a1833cc8dfd0936f4f5302c8f5799a-merged.mount: Deactivated successfully. Dec 2 04:10:01 localhost podman[110769]: 2025-12-02 09:10:01.371386564 +0000 UTC m=+0.872749213 container cleanup 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:10:01 localhost podman[110769]: ovn_metadata_agent Dec 2 04:10:01 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.timer: Failed to open 
/run/systemd/transient/6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.timer: No such file or directory Dec 2 04:10:01 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed to open /run/systemd/transient/6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: No such file or directory Dec 2 04:10:01 localhost podman[110783]: 2025-12-02 09:10:01.409886982 +0000 UTC m=+0.083822055 container cleanup 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=) Dec 2 04:10:01 localhost systemd[1]: libpod-conmon-6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.scope: Deactivated successfully. 
Dec 2 04:10:01 localhost podman[110811]: error opening file `/run/crun/6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b/status`: No such file or directory Dec 2 04:10:01 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.timer: Failed to open /run/systemd/transient/6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.timer: No such file or directory Dec 2 04:10:01 localhost systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed to open /run/systemd/transient/6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: No such file or directory Dec 2 04:10:01 localhost podman[110799]: 2025-12-02 09:10:01.513613474 +0000 UTC m=+0.071023943 container cleanup 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 04:10:01 localhost podman[110799]: ovn_metadata_agent Dec 2 04:10:01 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Deactivated successfully. Dec 2 04:10:01 localhost systemd[1]: Stopped ovn_metadata_agent container. 
Dec 2 04:10:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56989 DF PROTO=TCP SPT=34814 DPT=9102 SEQ=1041132309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530BF220000000001030307) Dec 2 04:10:02 localhost python3.9[110904]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:10:02 localhost systemd[1]: Reloading. Dec 2 04:10:02 localhost systemd-rc-local-generator[110930]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:10:02 localhost systemd-sysv-generator[110933]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:10:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:10:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12965 DF PROTO=TCP SPT=48056 DPT=9882 SEQ=547223113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530C7300000000001030307) Dec 2 04:10:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12966 DF PROTO=TCP SPT=48056 DPT=9882 SEQ=547223113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530CB230000000001030307) Dec 2 04:10:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12967 DF PROTO=TCP SPT=48056 DPT=9882 SEQ=547223113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530D3230000000001030307) Dec 2 04:10:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26792 DF PROTO=TCP SPT=46334 DPT=9100 SEQ=4060823706 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530E1220000000001030307) Dec 2 04:10:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52667 DF PROTO=TCP SPT=38722 DPT=9105 SEQ=4171385029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530EC620000000001030307) Dec 2 04:10:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14597 DF PROTO=TCP SPT=46348 DPT=9105 SEQ=3247990143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AD530F7220000000001030307) Dec 2 04:10:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12969 DF PROTO=TCP SPT=48056 DPT=9882 SEQ=547223113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53103220000000001030307) Dec 2 04:10:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30928 DF PROTO=TCP SPT=44932 DPT=9101 SEQ=154037446 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5310F220000000001030307) Dec 2 04:10:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37505 DF PROTO=TCP SPT=59904 DPT=9101 SEQ=1754290358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53123220000000001030307) Dec 2 04:10:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52670 DF PROTO=TCP SPT=38722 DPT=9105 SEQ=4171385029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53125230000000001030307) Dec 2 04:10:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34520 DF PROTO=TCP SPT=47500 DPT=9102 SEQ=757529416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53133220000000001030307) Dec 2 04:10:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9598 DF PROTO=TCP SPT=37754 DPT=9882 SEQ=1079819386 ACK=0 WINDOW=32640 RES=0x00 SYN 
URGP=0 OPT (020405500402080AD53140620000000001030307) Dec 2 04:10:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9599 DF PROTO=TCP SPT=37754 DPT=9882 SEQ=1079819386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53148620000000001030307) Dec 2 04:10:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36885 DF PROTO=TCP SPT=53100 DPT=9100 SEQ=2961383613 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53157220000000001030307) Dec 2 04:10:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32005 DF PROTO=TCP SPT=54762 DPT=9105 SEQ=3236264239 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53161620000000001030307) Dec 2 04:10:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7552 DF PROTO=TCP SPT=52780 DPT=9102 SEQ=999050472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5316CBE0000000001030307) Dec 2 04:10:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7554 DF PROTO=TCP SPT=52780 DPT=9102 SEQ=999050472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53178E30000000001030307) Dec 2 04:10:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37507 DF PROTO=TCP SPT=59904 DPT=9101 SEQ=1754290358 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080AD53183220000000001030307) Dec 2 04:10:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43170 DF PROTO=TCP SPT=54284 DPT=9101 SEQ=529320762 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53198620000000001030307) Dec 2 04:11:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7556 DF PROTO=TCP SPT=52780 DPT=9102 SEQ=999050472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531A9230000000001030307) Dec 2 04:11:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63290 DF PROTO=TCP SPT=34284 DPT=9882 SEQ=2949267371 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531B1900000000001030307) Dec 2 04:11:03 localhost sshd[111034]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:11:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63291 DF PROTO=TCP SPT=34284 DPT=9882 SEQ=2949267371 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531B5A20000000001030307) Dec 2 04:11:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63292 DF PROTO=TCP SPT=34284 DPT=9882 SEQ=2949267371 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531BDA20000000001030307) Dec 2 04:11:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 
PREC=0x00 TTL=62 ID=34842 DF PROTO=TCP SPT=59378 DPT=9100 SEQ=654284128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531CB220000000001030307) Dec 2 04:11:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36886 DF PROTO=TCP SPT=53100 DPT=9100 SEQ=2961383613 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531D5230000000001030307) Dec 2 04:11:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44201 DF PROTO=TCP SPT=51550 DPT=9102 SEQ=2399346664 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531E1EF0000000001030307) Dec 2 04:11:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63294 DF PROTO=TCP SPT=34284 DPT=9882 SEQ=2949267371 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531ED220000000001030307) Dec 2 04:11:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43172 DF PROTO=TCP SPT=54284 DPT=9101 SEQ=529320762 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531F99F0000000001030307) Dec 2 04:11:22 localhost systemd[1]: session-36.scope: Deactivated successfully. Dec 2 04:11:22 localhost systemd[1]: session-36.scope: Consumed 18.508s CPU time. Dec 2 04:11:22 localhost systemd-logind[760]: Session 36 logged out. Waiting for processes to exit. Dec 2 04:11:22 localhost systemd-logind[760]: Removed session 36. Dec 2 04:11:24 localhost systemd[1]: tmp-crun.6NbGsy.mount: Deactivated successfully. 
Dec 2 04:11:24 localhost podman[111140]: 2025-12-02 09:11:24.503288539 +0000 UTC m=+0.076277338 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, name=rhceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, release=1763362218, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:11:24 localhost podman[111140]: 2025-12-02 09:11:24.601751976 +0000 UTC m=+0.174740765 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, release=1763362218) Dec 2 04:11:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1928 DF PROTO=TCP SPT=54738 DPT=9101 SEQ=89299610 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5320D620000000001030307) Dec 2 04:11:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44205 DF PROTO=TCP SPT=51550 DPT=9102 SEQ=2399346664 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5321D220000000001030307) Dec 2 04:11:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23642 DF PROTO=TCP SPT=52856 DPT=9882 SEQ=3339190473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53226C10000000001030307) Dec 2 04:11:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23643 DF 
PROTO=TCP SPT=52856 DPT=9882 SEQ=3339190473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5322AE20000000001030307) Dec 2 04:11:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23644 DF PROTO=TCP SPT=52856 DPT=9882 SEQ=3339190473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53232E20000000001030307) Dec 2 04:11:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42322 DF PROTO=TCP SPT=39816 DPT=9100 SEQ=4098914354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53241220000000001030307) Dec 2 04:11:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4070 DF PROTO=TCP SPT=57934 DPT=9105 SEQ=2366396435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5324BE20000000001030307) Dec 2 04:11:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25777 DF PROTO=TCP SPT=49174 DPT=9102 SEQ=209736273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532571E0000000001030307) Dec 2 04:11:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25779 DF PROTO=TCP SPT=49174 DPT=9102 SEQ=209736273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53263230000000001030307) Dec 2 04:11:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 
ID=1930 DF PROTO=TCP SPT=54738 DPT=9101 SEQ=89299610 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5326D220000000001030307) Dec 2 04:11:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40830 DF PROTO=TCP SPT=35152 DPT=9101 SEQ=2088900458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53282A30000000001030307) Dec 2 04:12:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25781 DF PROTO=TCP SPT=49174 DPT=9102 SEQ=209736273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53293230000000001030307) Dec 2 04:12:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37029 DF PROTO=TCP SPT=34378 DPT=9882 SEQ=1608111940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5329BF10000000001030307) Dec 2 04:12:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37030 DF PROTO=TCP SPT=34378 DPT=9882 SEQ=1608111940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5329FE30000000001030307) Dec 2 04:12:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37031 DF PROTO=TCP SPT=34378 DPT=9882 SEQ=1608111940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532A7E20000000001030307) Dec 2 04:12:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 
TTL=62 ID=37709 DF PROTO=TCP SPT=60128 DPT=9100 SEQ=1773251718 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532B5220000000001030307) Dec 2 04:12:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42323 DF PROTO=TCP SPT=39816 DPT=9100 SEQ=4098914354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532BF220000000001030307) Dec 2 04:12:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50118 DF PROTO=TCP SPT=35034 DPT=9102 SEQ=986971140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532CC4F0000000001030307) Dec 2 04:12:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37033 DF PROTO=TCP SPT=34378 DPT=9882 SEQ=1608111940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532D7230000000001030307) Dec 2 04:12:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40832 DF PROTO=TCP SPT=35152 DPT=9101 SEQ=2088900458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532E3220000000001030307) Dec 2 04:12:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23663 DF PROTO=TCP SPT=48094 DPT=9101 SEQ=3243340766 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532F7E20000000001030307) Dec 2 04:12:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 
PREC=0x00 TTL=62 ID=50122 DF PROTO=TCP SPT=35034 DPT=9102 SEQ=986971140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53309220000000001030307) Dec 2 04:12:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=714 DF PROTO=TCP SPT=32882 DPT=9882 SEQ=4106522726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53311210000000001030307) Dec 2 04:12:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=715 DF PROTO=TCP SPT=32882 DPT=9882 SEQ=4106522726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53315220000000001030307) Dec 2 04:12:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=716 DF PROTO=TCP SPT=32882 DPT=9882 SEQ=4106522726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5331D220000000001030307) Dec 2 04:12:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56104 DF PROTO=TCP SPT=42270 DPT=9100 SEQ=3870687681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5332B220000000001030307) Dec 2 04:12:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64517 DF PROTO=TCP SPT=59300 DPT=9105 SEQ=3427386491 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53336220000000001030307) Dec 2 04:12:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=4075 DF PROTO=TCP SPT=57934 DPT=9105 SEQ=2366396435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53341220000000001030307) Dec 2 04:12:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=718 DF PROTO=TCP SPT=32882 DPT=9882 SEQ=4106522726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5334D220000000001030307) Dec 2 04:12:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23665 DF PROTO=TCP SPT=48094 DPT=9101 SEQ=3243340766 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53359220000000001030307) Dec 2 04:12:52 localhost sshd[111362]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:12:52 localhost systemd-logind[760]: New session 37 of user zuul. Dec 2 04:12:52 localhost systemd[1]: Started Session 37 of User zuul. 
Dec 2 04:12:52 localhost python3.9[111443]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:53 localhost python3.9[111535]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:53 localhost python3.9[111627]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:54 localhost python3.9[111719]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:54 localhost python3.9[111811]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:55 localhost python3.9[111903]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:55 localhost python3.9[111995]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:56 localhost python3.9[112087]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:57 localhost python3.9[112179]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False 
follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33 DF PROTO=TCP SPT=40840 DPT=9101 SEQ=1232125286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5336D230000000001030307) Dec 2 04:12:57 localhost python3.9[112271]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64520 DF PROTO=TCP SPT=59300 DPT=9105 SEQ=3427386491 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5336F220000000001030307) Dec 2 04:12:58 localhost python3.9[112363]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:58 localhost python3.9[112455]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:59 localhost python3.9[112547]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:59 localhost python3.9[112639]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:00 localhost python3.9[112731]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:00 localhost python3.9[112823]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent 
recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61830 DF PROTO=TCP SPT=58996 DPT=9102 SEQ=2908849782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5337D220000000001030307) Dec 2 04:13:01 localhost python3.9[112915]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:02 localhost python3.9[113007]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:02 localhost python3.9[113099]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None 
owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:03 localhost python3.9[113191]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:03 localhost python3.9[113283]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59566 DF PROTO=TCP SPT=56196 DPT=9882 SEQ=1383719902 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5338A630000000001030307) Dec 2 04:13:05 localhost python3.9[113375]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:05 localhost python3.9[113467]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service 
state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:06 localhost python3.9[113559]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59567 DF PROTO=TCP SPT=56196 DPT=9882 SEQ=1383719902 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53392620000000001030307) Dec 2 04:13:06 localhost python3.9[113651]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:07 localhost python3.9[113743]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None 
seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:07 localhost python3.9[113835]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:08 localhost python3.9[113927]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:08 localhost python3.9[114019]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:09 localhost python3.9[114111]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:09 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62276 DF PROTO=TCP SPT=36024 DPT=9100 SEQ=2944991294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5339F220000000001030307) Dec 2 04:13:10 localhost python3.9[114203]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:10 localhost python3.9[114295]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:11 localhost python3.9[114387]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:11 localhost python3.9[114479]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:12 localhost python3.9[114571]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:12 localhost sshd[114592]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:13:12 localhost python3.9[114665]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45637 DF PROTO=TCP SPT=56896 DPT=9105 SEQ=4178981333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533AB620000000001030307) Dec 2 04:13:13 localhost python3.9[114757]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None 
serole=None selevel=None setype=None attributes=None Dec 2 04:13:13 localhost python3.9[114849]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:14 localhost python3.9[114941]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:14 localhost python3.9[115033]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:15 localhost python3.9[115125]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:15 localhost python3.9[115217]: 
ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49685 DF PROTO=TCP SPT=35108 DPT=9102 SEQ=696421513 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533B6AE0000000001030307) Dec 2 04:13:17 localhost python3.9[115309]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:13:17 localhost python3.9[115401]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 2 04:13:18 localhost python3.9[115493]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:13:18 localhost systemd[1]: Reloading. Dec 2 04:13:18 localhost systemd-rc-local-generator[115515]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 04:13:18 localhost systemd-sysv-generator[115521]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:13:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:13:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49687 DF PROTO=TCP SPT=35108 DPT=9102 SEQ=696421513 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533C2A20000000001030307)
Dec 2 04:13:19 localhost python3.9[115621]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:20 localhost python3.9[115714]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35 DF PROTO=TCP SPT=40840 DPT=9101 SEQ=1232125286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533CD220000000001030307)
Dec 2 04:13:21 localhost python3.9[115807]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:22 localhost python3.9[115900]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:23 localhost python3.9[115993]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:23 localhost python3.9[116086]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:24 localhost python3.9[116179]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:24 localhost python3.9[116272]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:25 localhost python3.9[116365]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:25 localhost python3.9[116458]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:26 localhost python3.9[116551]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:26 localhost python3.9[116644]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65491 DF PROTO=TCP SPT=36098 DPT=9101 SEQ=515077669 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533E2220000000001030307)
Dec 2 04:13:27 localhost python3.9[116737]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:28 localhost python3.9[116831]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:28 localhost python3.9[116954]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:29 localhost python3.9[117080]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:30 localhost python3.9[117173]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:30 localhost python3.9[117281]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:31 localhost python3.9[117374]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49689 DF PROTO=TCP SPT=35108 DPT=9102 SEQ=696421513 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533F3230000000001030307)
Dec 2 04:13:31 localhost python3.9[117467]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:32 localhost python3.9[117560]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:32 localhost systemd[1]: session-37.scope: Deactivated successfully.
Dec 2 04:13:32 localhost systemd[1]: session-37.scope: Consumed 29.633s CPU time.
Dec 2 04:13:32 localhost systemd-logind[760]: Session 37 logged out. Waiting for processes to exit.
Dec 2 04:13:32 localhost systemd-logind[760]: Removed session 37.
Dec 2 04:13:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57879 DF PROTO=TCP SPT=41738 DPT=9882 SEQ=4154017471 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533FB800000000001030307)
Dec 2 04:13:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57880 DF PROTO=TCP SPT=41738 DPT=9882 SEQ=4154017471 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533FFA20000000001030307)
Dec 2 04:13:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57881 DF PROTO=TCP SPT=41738 DPT=9882 SEQ=4154017471 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53407A20000000001030307)
Dec 2 04:13:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12817 DF PROTO=TCP SPT=50014 DPT=9100 SEQ=2942203590 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53415230000000001030307)
Dec 2 04:13:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32033 DF PROTO=TCP SPT=49552 DPT=9105 SEQ=1345071773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53420A20000000001030307)
Dec 2 04:13:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26541 DF PROTO=TCP SPT=56402 DPT=9102 SEQ=1972207252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5342BDE0000000001030307)
Dec 2 04:13:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57883 DF PROTO=TCP SPT=41738 DPT=9882 SEQ=4154017471 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53437220000000001030307)
Dec 2 04:13:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65493 DF PROTO=TCP SPT=36098 DPT=9101 SEQ=515077669 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53443230000000001030307)
Dec 2 04:13:52 localhost sshd[117576]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:13:52 localhost systemd-logind[760]: New session 38 of user zuul.
Dec 2 04:13:52 localhost systemd[1]: Started Session 38 of User zuul.
Dec 2 04:13:52 localhost python3.9[117669]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 2 04:13:54 localhost python3.9[117773]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:13:54 localhost python3.9[117865]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:55 localhost python3.9[117958]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:13:56 localhost python3.9[118050]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:13:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5137 DF PROTO=TCP SPT=34382 DPT=9101 SEQ=272883982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53457630000000001030307)
Dec 2 04:13:57 localhost python3.9[118142]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:13:58 localhost python3.9[118215]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764666836.910215-180-74293053735131/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:13:58 localhost python3.9[118307]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:13:59 localhost python3.9[118403]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:14:00 localhost python3.9[118495]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:14:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26545 DF PROTO=TCP SPT=56402 DPT=9102 SEQ=1972207252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53467220000000001030307)
Dec 2 04:14:01 localhost python3.9[118585]: ansible-ansible.builtin.service_facts Invoked
Dec 2 04:14:01 localhost network[118602]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 2 04:14:01 localhost network[118603]: 'network-scripts' will be removed from distribution in near future.
Dec 2 04:14:01 localhost network[118604]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 2 04:14:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59198 DF PROTO=TCP SPT=44674 DPT=9882 SEQ=3678166113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53470B10000000001030307)
Dec 2 04:14:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59199 DF PROTO=TCP SPT=44674 DPT=9882 SEQ=3678166113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53474A20000000001030307)
Dec 2 04:14:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:14:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59200 DF PROTO=TCP SPT=44674 DPT=9882 SEQ=3678166113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5347CA20000000001030307)
Dec 2 04:14:08 localhost python3.9[118802]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:14:08 localhost python3.9[118892]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:14:09 localhost python3.9[118988]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream#012set -euxo pipefail#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main#012# This is required for FIPS enabled until trunk.rdoproject.org#012# is not being served from a centos7 host, tracked by#012# https://issues.redhat.com/browse/RHOSZUUL-1517#012dnf -y install crypto-policies#012update-crypto-policies --set FIPS:NO-ENFORCE-EMS#012./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream#012#012# Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible#012# with rhel 9.2 openssh#012dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save#012# FIXME: perform dnf upgrade for other packages in EDPM ansible#012# here we only ensuring that decontainerized libvirt can start#012dnf -y upgrade openstack-selinux#012rm -f /run/virtlogd.pid#012#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:14:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28643 DF PROTO=TCP SPT=51150 DPT=9100 SEQ=3538816596 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5348B220000000001030307)
Dec 2 04:14:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19967 DF PROTO=TCP SPT=35512 DPT=9105 SEQ=3679674891 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53495E20000000001030307)
Dec 2 04:14:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51621 DF PROTO=TCP SPT=37264 DPT=9102 SEQ=2236483668 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534A10E0000000001030307)
Dec 2 04:14:18 localhost systemd[1]: Stopping OpenSSH server daemon...
Dec 2 04:14:18 localhost systemd[1]: sshd.service: Deactivated successfully.
Dec 2 04:14:18 localhost systemd[1]: Stopped OpenSSH server daemon.
Dec 2 04:14:18 localhost systemd[1]: sshd.service: Consumed 3.693s CPU time.
Dec 2 04:14:18 localhost systemd[1]: Stopped target sshd-keygen.target.
Dec 2 04:14:18 localhost systemd[1]: Stopping sshd-keygen.target...
Dec 2 04:14:18 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 04:14:18 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 04:14:18 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 04:14:18 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 2 04:14:19 localhost systemd[1]: Starting OpenSSH server daemon...
Dec 2 04:14:19 localhost sshd[119031]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:14:19 localhost systemd[1]: Started OpenSSH server daemon.
Dec 2 04:14:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59202 DF PROTO=TCP SPT=44674 DPT=9882 SEQ=3678166113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534AD220000000001030307)
Dec 2 04:14:19 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 04:14:19 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 04:14:19 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 04:14:19 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 04:14:19 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 04:14:19 localhost systemd[1]: run-r80ad94e07750407e8a9c5b852d2bd790.service: Deactivated successfully.
Dec 2 04:14:19 localhost systemd[1]: run-r98bd105f013d45dca59e584de63389eb.service: Deactivated successfully.
Dec 2 04:14:20 localhost systemd[1]: Stopping OpenSSH server daemon...
Dec 2 04:14:20 localhost systemd[1]: sshd.service: Deactivated successfully.
Dec 2 04:14:20 localhost systemd[1]: Stopped OpenSSH server daemon.
Dec 2 04:14:20 localhost systemd[1]: Stopped target sshd-keygen.target.
Dec 2 04:14:20 localhost systemd[1]: Stopping sshd-keygen.target...
Dec 2 04:14:20 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 04:14:20 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 04:14:20 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 04:14:20 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 2 04:14:20 localhost systemd[1]: Starting OpenSSH server daemon...
Dec 2 04:14:20 localhost sshd[119204]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:14:20 localhost systemd[1]: Started OpenSSH server daemon.
Dec 2 04:14:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5139 DF PROTO=TCP SPT=34382 DPT=9101 SEQ=272883982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534B7220000000001030307)
Dec 2 04:14:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21660 DF PROTO=TCP SPT=43870 DPT=9101 SEQ=3299707963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534CCA20000000001030307)
Dec 2 04:14:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51625 DF PROTO=TCP SPT=37264 DPT=9102 SEQ=2236483668 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534DD220000000001030307)
Dec 2 04:14:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57603 DF PROTO=TCP SPT=33302 DPT=9882 SEQ=2211759833 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534E5E00000000001030307)
Dec 2 04:14:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57604 DF PROTO=TCP SPT=33302 DPT=9882 SEQ=2211759833 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534E9E20000000001030307)
Dec 2 04:14:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57605 DF PROTO=TCP SPT=33302 DPT=9882 SEQ=2211759833 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534F1E20000000001030307)
Dec 2 04:14:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38222 DF PROTO=TCP SPT=40570 DPT=9100 SEQ=1002037188 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534FF230000000001030307)
Dec 2 04:14:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28644 DF PROTO=TCP SPT=51150 DPT=9100 SEQ=3538816596 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53509220000000001030307)
Dec 2 04:14:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61241 DF PROTO=TCP SPT=39538 DPT=9102 SEQ=1073381646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535163F0000000001030307)
Dec 2 04:14:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57607 DF PROTO=TCP SPT=33302 DPT=9882 SEQ=2211759833 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53521220000000001030307)
Dec 2 04:14:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21662 DF PROTO=TCP SPT=43870 DPT=9101 SEQ=3299707963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5352D220000000001030307)
Dec 2 04:14:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33201 DF PROTO=TCP SPT=51146 DPT=9101 SEQ=761795435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53541E30000000001030307)
Dec 2 04:15:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61245 DF PROTO=TCP SPT=39538 DPT=9102 SEQ=1073381646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53553230000000001030307)
Dec 2 04:15:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10311 DF PROTO=TCP SPT=60256 DPT=9882 SEQ=983512334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5355B110000000001030307)
Dec 2 04:15:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10312 DF PROTO=TCP SPT=60256 DPT=9882 SEQ=983512334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5355F220000000001030307)
Dec 2 04:15:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10313 DF PROTO=TCP SPT=60256 DPT=9882 SEQ=983512334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53567220000000001030307)
Dec 2 04:15:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27259 DF PROTO=TCP SPT=51864 DPT=9100 SEQ=2946643447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53575220000000001030307)
Dec 2 04:15:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12982 DF PROTO=TCP SPT=59050 DPT=9105 SEQ=1692665145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53580220000000001030307)
Dec 2 04:15:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19972 DF PROTO=TCP SPT=35512 DPT=9105 SEQ=3679674891 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5358B220000000001030307)
Dec 2 04:15:16 localhost sshd[119613]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:15:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10315 DF PROTO=TCP SPT=60256 DPT=9882 SEQ=983512334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53597220000000001030307)
Dec 2 04:15:19 localhost sshd[119615]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:15:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33203 DF PROTO=TCP SPT=51146 DPT=9101 SEQ=761795435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535A3230000000001030307)
Dec 2 04:15:25 localhost sshd[119649]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:15:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47195 DF PROTO=TCP SPT=57954 DPT=9101 SEQ=2147832935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535B6E20000000001030307)
Dec 2 04:15:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12985 DF PROTO=TCP SPT=59050 DPT=9105 SEQ=1692665145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535B9220000000001030307)
Dec 2 04:15:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13493 DF PROTO=TCP SPT=34252 DPT=9102 SEQ=2253941747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535C7220000000001030307)
Dec 2 04:15:31 localhost kernel: SELinux: Converting 2741 SID table entries...
Dec 2 04:15:31 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 04:15:31 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 04:15:31 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 04:15:31 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 04:15:31 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 04:15:31 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 04:15:31 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 04:15:31 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=17 res=1
Dec 2 04:15:34 localhost python3.9[119910]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:15:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64936 DF PROTO=TCP SPT=40650 DPT=9882 SEQ=425084901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535D4620000000001030307)
Dec 2 04:15:34 localhost python3.9[120002]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:15:35 localhost python3.9[120075]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764666934.4324012-428-122359247689998/.source.fact _original_basename=.ruw2xycs follow=False checksum=03aee63dcf9b49b0ac4473b2f1a1b5d3783aa639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:15:36 localhost python3.9[120165]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:15:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64937 DF PROTO=TCP SPT=40650 DPT=9882 SEQ=425084901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535DC620000000001030307)
Dec 2 04:15:37 localhost python3.9[120278]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 2 04:15:38 localhost python3.9[120332]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:15:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47519 DF PROTO=TCP SPT=44026 DPT=9100 SEQ=2383907933 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535E9230000000001030307)
Dec 2 04:15:41 localhost systemd[1]: Reloading.
Dec 2 04:15:41 localhost systemd-sysv-generator[120370]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:15:41 localhost systemd-rc-local-generator[120366]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:15:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:15:41 localhost systemd[1]: Queuing reload/restart jobs for marked units… Dec 2 04:15:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3081 DF PROTO=TCP SPT=48796 DPT=9105 SEQ=903693248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535F5630000000001030307) Dec 2 04:15:44 localhost python3.9[120471]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:15:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51851 DF PROTO=TCP SPT=47874 DPT=9102 SEQ=2931664264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536009E0000000001030307) Dec 2 04:15:46 localhost python3.9[120710]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False Dec 2 04:15:47 localhost python3.9[120802]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None Dec 2 04:15:48 localhost python3.9[120895]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:15:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51853 DF PROTO=TCP SPT=47874 DPT=9102 SEQ=2931664264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5360CA30000000001030307) Dec 2 04:15:49 localhost python3.9[120987]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None Dec 2 04:15:50 localhost python3.9[121079]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:15:51 localhost python3.9[121171]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:15:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47197 DF PROTO=TCP SPT=57954 DPT=9101 SEQ=2147832935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53617230000000001030307) Dec 2 04:15:52 localhost python3.9[121244]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764666951.224332-753-158469701170302/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:15:53 localhost python3.9[121336]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:15:54 localhost python3.9[121430]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None Dec 2 04:15:55 localhost python3.9[121523]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None Dec 2 04:15:56 localhost python3.9[121616]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Dec 2 04:15:57 localhost python3.9[121714]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None Dec 2 04:15:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20497 DF PROTO=TCP SPT=39928 DPT=9101 SEQ=657770300 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5362C220000000001030307) Dec 2 04:15:58 localhost python3.9[121806]: 
ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:16:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51855 DF PROTO=TCP SPT=47874 DPT=9102 SEQ=2931664264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5363D230000000001030307) Dec 2 04:16:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23601 DF PROTO=TCP SPT=45792 DPT=9882 SEQ=1899204899 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53645710000000001030307) Dec 2 04:16:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23602 DF PROTO=TCP SPT=45792 DPT=9882 SEQ=1899204899 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53649630000000001030307) Dec 2 04:16:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23603 DF PROTO=TCP SPT=45792 DPT=9882 SEQ=1899204899 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53651620000000001030307) Dec 2 04:16:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 
MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23080 DF PROTO=TCP SPT=56834 DPT=9100 SEQ=4047028326 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5365F220000000001030307) Dec 2 04:16:11 localhost python3.9[121900]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:16:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15657 DF PROTO=TCP SPT=47886 DPT=9105 SEQ=1501209150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5366AA30000000001030307) Dec 2 04:16:13 localhost python3.9[121992]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:16:14 localhost python3.9[122065]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764666972.128122-1026-229181094482736/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 04:16:15 localhost python3.9[122157]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted 
daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:16:15 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 2 04:16:15 localhost systemd[1]: Stopped Load Kernel Modules. Dec 2 04:16:15 localhost systemd[1]: Stopping Load Kernel Modules... Dec 2 04:16:15 localhost systemd[1]: Starting Load Kernel Modules... Dec 2 04:16:15 localhost systemd-modules-load[122161]: Module 'msr' is built in Dec 2 04:16:15 localhost systemd[1]: Finished Load Kernel Modules. Dec 2 04:16:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5675 DF PROTO=TCP SPT=49346 DPT=9102 SEQ=2874799217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53675CE0000000001030307) Dec 2 04:16:16 localhost python3.9[122254]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:16:16 localhost python3.9[122327]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764666975.7022333-1095-208242625907491/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 04:16:17 localhost python3.9[122419]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False 
enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:16:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23605 DF PROTO=TCP SPT=45792 DPT=9882 SEQ=1899204899 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53681220000000001030307) Dec 2 04:16:21 localhost python3.9[122511]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:16:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20499 DF PROTO=TCP SPT=39928 DPT=9101 SEQ=657770300 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5368D220000000001030307) Dec 2 04:16:23 localhost python3.9[122603]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Dec 2 04:16:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51826 DF PROTO=TCP SPT=39712 DPT=9101 SEQ=3500661361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536A1620000000001030307) Dec 2 04:16:27 localhost python3.9[122693]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:16:28 localhost python3.9[122785]: 
ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:16:28 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Dec 2 04:16:28 localhost systemd[1]: tuned.service: Deactivated successfully. Dec 2 04:16:28 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Dec 2 04:16:28 localhost systemd[1]: tuned.service: Consumed 1.830s CPU time, no IO. Dec 2 04:16:28 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Dec 2 04:16:29 localhost systemd[1]: Started Dynamic System Tuning Daemon. Dec 2 04:16:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5679 DF PROTO=TCP SPT=49346 DPT=9102 SEQ=2874799217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536B1220000000001030307) Dec 2 04:16:32 localhost python3.9[122888]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Dec 2 04:16:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2757 DF PROTO=TCP SPT=55756 DPT=9882 SEQ=3921470690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536BAA10000000001030307) Dec 2 04:16:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2758 DF PROTO=TCP SPT=55756 DPT=9882 SEQ=3921470690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536BEA20000000001030307) Dec 2 04:16:35 localhost python3.9[122980]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:16:36 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2759 DF PROTO=TCP SPT=55756 DPT=9882 SEQ=3921470690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536C6A20000000001030307) Dec 2 04:16:36 localhost systemd[1]: Reloading. Dec 2 04:16:36 localhost systemd-sysv-generator[123054]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:16:36 localhost systemd-rc-local-generator[123050]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:16:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:16:37 localhost python3.9[123201]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:16:37 localhost systemd[1]: Reloading. Dec 2 04:16:37 localhost systemd-rc-local-generator[123248]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:16:37 localhost systemd-sysv-generator[123251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:16:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:16:38 localhost python3.9[123351]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:16:39 localhost python3.9[123444]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:16:39 localhost kernel: Adding 1048572k swap on /swap. Priority:-2 extents:1 across:1048572k FS Dec 2 04:16:40 localhost python3.9[123537]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:16:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56357 DF PROTO=TCP SPT=37398 DPT=9100 SEQ=1149734740 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536D5230000000001030307) Dec 2 04:16:42 localhost python3.9[123651]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:16:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38798 DF PROTO=TCP SPT=51132 DPT=9105 SEQ=2584331989 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536DFA20000000001030307) Dec 2 04:16:44 localhost python3.9[123744]: 
ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:16:44 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 2 04:16:44 localhost systemd[1]: Stopped Apply Kernel Variables. Dec 2 04:16:44 localhost systemd[1]: Stopping Apply Kernel Variables... Dec 2 04:16:44 localhost systemd[1]: Starting Apply Kernel Variables... Dec 2 04:16:44 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 2 04:16:44 localhost systemd[1]: Finished Apply Kernel Variables. Dec 2 04:16:45 localhost systemd[1]: session-38.scope: Deactivated successfully. Dec 2 04:16:45 localhost systemd[1]: session-38.scope: Consumed 1min 57.032s CPU time. Dec 2 04:16:45 localhost systemd-logind[760]: Session 38 logged out. Waiting for processes to exit. Dec 2 04:16:45 localhost systemd-logind[760]: Removed session 38. 
Dec 2 04:16:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65498 DF PROTO=TCP SPT=56068 DPT=9102 SEQ=3437420992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536EAFF0000000001030307) Dec 2 04:16:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65500 DF PROTO=TCP SPT=56068 DPT=9102 SEQ=3437420992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536F7220000000001030307) Dec 2 04:16:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51828 DF PROTO=TCP SPT=39712 DPT=9101 SEQ=3500661361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53701220000000001030307) Dec 2 04:16:52 localhost sshd[123765]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:16:52 localhost systemd-logind[760]: New session 39 of user zuul. Dec 2 04:16:52 localhost systemd[1]: Started Session 39 of User zuul. 
Dec 2 04:16:53 localhost python3.9[123858]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:16:55 localhost python3.9[123952]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:16:56 localhost python3.9[124048]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:16:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12405 DF PROTO=TCP SPT=36480 DPT=9101 SEQ=1553097679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53716A20000000001030307) Dec 2 04:16:57 localhost python3.9[124139]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:16:58 localhost python3.9[124235]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:16:59 localhost python3.9[124289]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None 
nobest=None releasever=None Dec 2 04:17:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65502 DF PROTO=TCP SPT=56068 DPT=9102 SEQ=3437420992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53727220000000001030307) Dec 2 04:17:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16826 DF PROTO=TCP SPT=39868 DPT=9882 SEQ=1628927917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5372FD00000000001030307) Dec 2 04:17:03 localhost python3.9[124383]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:17:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16827 DF PROTO=TCP SPT=39868 DPT=9882 SEQ=1628927917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53733E20000000001030307) Dec 2 04:17:05 localhost python3.9[124530]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:17:06 localhost python3.9[124622]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 
04:17:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16828 DF PROTO=TCP SPT=39868 DPT=9882 SEQ=1628927917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5373BE20000000001030307) Dec 2 04:17:07 localhost python3.9[124725]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:17:07 localhost python3.9[124773]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:17:08 localhost python3.9[124865]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:17:09 localhost python3.9[124938]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667028.164889-325-170069321373359/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 04:17:10 localhost 
kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59967 DF PROTO=TCP SPT=41984 DPT=9100 SEQ=1476989852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53749230000000001030307) Dec 2 04:17:10 localhost python3.9[125030]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 2 04:17:10 localhost python3.9[125122]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 2 04:17:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:17:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 
GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 04:17:11 localhost python3.9[125214]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 2 04:17:12 localhost python3.9[125306]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 2 04:17:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56358 DF PROTO=TCP SPT=37398 DPT=9100 SEQ=1149734740 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53753220000000001030307) Dec 2 04:17:13 localhost python3.9[125396]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:17:13 localhost python3.9[125490]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False 
bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:17:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:17:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.2 total, 600.0 interval#012Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 04:17:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31048 DF PROTO=TCP SPT=60386 DPT=9102 SEQ=1629376715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53760320000000001030307) Dec 2 04:17:17 localhost python3.9[125584]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True 
lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:17:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16830 DF PROTO=TCP SPT=39868 DPT=9882 SEQ=1628927917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5376B220000000001030307) Dec 2 04:17:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12407 DF PROTO=TCP SPT=36480 DPT=9101 SEQ=1553097679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53777220000000001030307) Dec 2 04:17:22 localhost python3.9[125678]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:17:26 localhost python3.9[125778]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None 
download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:17:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30372 DF PROTO=TCP SPT=52198 DPT=9101 SEQ=1518831331 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5378BA30000000001030307) Dec 2 04:17:30 localhost python3.9[125872]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:17:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31052 DF PROTO=TCP SPT=60386 DPT=9102 SEQ=1629376715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5379D220000000001030307) Dec 2 04:17:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54346 DF PROTO=TCP SPT=54104 DPT=9882 SEQ=1804301071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537A5000000000001030307) Dec 2 04:17:34 localhost python3.9[125966]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ 
install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:17:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54347 DF PROTO=TCP SPT=54104 DPT=9882 SEQ=1804301071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537A9220000000001030307) Dec 2 04:17:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54348 DF PROTO=TCP SPT=54104 DPT=9882 SEQ=1804301071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537B1220000000001030307) Dec 2 04:17:37 localhost sshd[125983]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:17:38 localhost python3.9[126062]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:17:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 
ID=23655 DF PROTO=TCP SPT=46322 DPT=9100 SEQ=702705207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537BF220000000001030307) Dec 2 04:17:42 localhost podman[126202]: Dec 2 04:17:42 localhost podman[126202]: 2025-12-02 09:17:42.527118089 +0000 UTC m=+0.078106974 container create 84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , release=1763362218, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, distribution-scope=public) Dec 2 04:17:42 localhost systemd[1]: Started libpod-conmon-84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54.scope. Dec 2 04:17:42 localhost podman[126202]: 2025-12-02 09:17:42.483942005 +0000 UTC m=+0.034930940 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:17:42 localhost systemd[1]: Started libcrun container. 
Dec 2 04:17:42 localhost podman[126202]: 2025-12-02 09:17:42.599744174 +0000 UTC m=+0.150733039 container init 84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., version=7, architecture=x86_64, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph) Dec 2 04:17:42 localhost podman[126202]: 2025-12-02 09:17:42.611920277 +0000 UTC m=+0.162909132 container start 84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=1763362218, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph 
Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, io.buildah.version=1.41.4, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, RELEASE=main, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.component=rhceph-container, vcs-type=git) Dec 2 04:17:42 localhost podman[126202]: 2025-12-02 09:17:42.61204096 +0000 UTC m=+0.163029815 container attach 84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, architecture=x86_64, release=1763362218, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, RELEASE=main, io.buildah.version=1.41.4, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , 
url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True) Dec 2 04:17:42 localhost wizardly_curie[126218]: 167 167 Dec 2 04:17:42 localhost systemd[1]: libpod-84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54.scope: Deactivated successfully. Dec 2 04:17:42 localhost podman[126202]: 2025-12-02 09:17:42.615484226 +0000 UTC m=+0.166473101 container died 84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, distribution-scope=public, name=rhceph, version=7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_CLEAN=True, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.tags=rhceph ceph) Dec 2 04:17:42 localhost podman[126223]: 2025-12-02 09:17:42.710077734 +0000 UTC m=+0.086530252 container remove 84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.expose-services=, GIT_CLEAN=True, 
url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.41.4, name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7) Dec 2 04:17:42 localhost systemd[1]: libpod-conmon-84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54.scope: Deactivated successfully. 
Dec 2 04:17:42 localhost podman[126244]: Dec 2 04:17:42 localhost podman[126244]: 2025-12-02 09:17:42.881747963 +0000 UTC m=+0.054905472 container create b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_swirles, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.openshift.expose-services=) Dec 2 04:17:42 localhost systemd[1]: Started libpod-conmon-b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a.scope. Dec 2 04:17:42 localhost systemd[1]: Started libcrun container. 
Dec 2 04:17:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21697b82c9ebe365447263db4453f93ccea58d8bfbc09a3e546e4839bd0993c8/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 04:17:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21697b82c9ebe365447263db4453f93ccea58d8bfbc09a3e546e4839bd0993c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 04:17:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21697b82c9ebe365447263db4453f93ccea58d8bfbc09a3e546e4839bd0993c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 04:17:42 localhost podman[126244]: 2025-12-02 09:17:42.942268828 +0000 UTC m=+0.115426337 container init b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_swirles, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, name=rhceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1763362218, io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9, version=7) Dec 2 04:17:42 localhost podman[126244]: 2025-12-02 09:17:42.953074579 +0000 UTC m=+0.126232088 container start b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_swirles, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, release=1763362218, vendor=Red Hat, Inc., distribution-scope=public, version=7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container) Dec 2 04:17:42 localhost podman[126244]: 2025-12-02 09:17:42.953503262 +0000 UTC m=+0.126660811 container attach b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_swirles, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, build-date=2025-11-26T19:44:28Z, architecture=x86_64, summary=Provides the 
latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, ceph=True, release=1763362218, version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, RELEASE=main) Dec 2 04:17:42 localhost podman[126244]: 2025-12-02 09:17:42.857726788 +0000 UTC m=+0.030884287 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:17:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44300 DF PROTO=TCP SPT=41902 DPT=9105 SEQ=642120871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537CA230000000001030307) Dec 2 04:17:43 localhost systemd[1]: tmp-crun.wMEOTZ.mount: Deactivated successfully. Dec 2 04:17:43 localhost systemd[1]: var-lib-containers-storage-overlay-dc91164455d8c3f143df14bb7242840af61b8f13dcda1adc54792e6afe87cac6-merged.mount: Deactivated successfully. 
Dec 2 04:17:43 localhost hardcore_swirles[126259]: [ Dec 2 04:17:43 localhost hardcore_swirles[126259]: { Dec 2 04:17:43 localhost hardcore_swirles[126259]: "available": false, Dec 2 04:17:43 localhost hardcore_swirles[126259]: "ceph_device": false, Dec 2 04:17:43 localhost hardcore_swirles[126259]: "device_id": "QEMU_DVD-ROM_QM00001", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "lsm_data": {}, Dec 2 04:17:43 localhost hardcore_swirles[126259]: "lvs": [], Dec 2 04:17:43 localhost hardcore_swirles[126259]: "path": "/dev/sr0", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "rejected_reasons": [ Dec 2 04:17:43 localhost hardcore_swirles[126259]: "Has a FileSystem", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "Insufficient space (<5GB)" Dec 2 04:17:43 localhost hardcore_swirles[126259]: ], Dec 2 04:17:43 localhost hardcore_swirles[126259]: "sys_api": { Dec 2 04:17:43 localhost hardcore_swirles[126259]: "actuators": null, Dec 2 04:17:43 localhost hardcore_swirles[126259]: "device_nodes": "sr0", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "human_readable_size": "482.00 KB", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "id_bus": "ata", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "model": "QEMU DVD-ROM", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "nr_requests": "2", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "partitions": {}, Dec 2 04:17:43 localhost hardcore_swirles[126259]: "path": "/dev/sr0", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "removable": "1", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "rev": "2.5+", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "ro": "0", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "rotational": "1", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "sas_address": "", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "sas_device_handle": "", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "scheduler_mode": "mq-deadline", Dec 2 04:17:43 localhost 
hardcore_swirles[126259]: "sectors": 0, Dec 2 04:17:43 localhost hardcore_swirles[126259]: "sectorsize": "2048", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "size": 493568.0, Dec 2 04:17:43 localhost hardcore_swirles[126259]: "support_discard": "0", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "type": "disk", Dec 2 04:17:43 localhost hardcore_swirles[126259]: "vendor": "QEMU" Dec 2 04:17:43 localhost hardcore_swirles[126259]: } Dec 2 04:17:43 localhost hardcore_swirles[126259]: } Dec 2 04:17:43 localhost hardcore_swirles[126259]: ] Dec 2 04:17:43 localhost systemd[1]: libpod-b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a.scope: Deactivated successfully. Dec 2 04:17:43 localhost podman[127620]: 2025-12-02 09:17:43.860392778 +0000 UTC m=+0.040236555 container died b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_swirles, architecture=x86_64, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.tags=rhceph ceph, vcs-type=git, name=rhceph, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:17:43 localhost systemd[1]: var-lib-containers-storage-overlay-21697b82c9ebe365447263db4453f93ccea58d8bfbc09a3e546e4839bd0993c8-merged.mount: Deactivated successfully. Dec 2 04:17:43 localhost podman[127620]: 2025-12-02 09:17:43.922101348 +0000 UTC m=+0.101945065 container remove b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_swirles, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, RELEASE=main, ceph=True) Dec 2 04:17:43 localhost systemd[1]: libpod-conmon-b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a.scope: Deactivated successfully. 
Dec 2 04:17:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38803 DF PROTO=TCP SPT=51132 DPT=9105 SEQ=2584331989 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537D5230000000001030307)
Dec 2 04:17:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54350 DF PROTO=TCP SPT=54104 DPT=9882 SEQ=1804301071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537E1230000000001030307)
Dec 2 04:17:50 localhost python3.9[127808]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:17:51 localhost python3.9[127913]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:17:51 localhost python3.9[127986]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764667070.8170419-724-182753056520520/.source.json _original_basename=.kifw4mzc follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:17:53 localhost python3.9[128078]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 2 04:17:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31696 DF PROTO=TCP SPT=34768 DPT=9102 SEQ=2977458884 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537F1220000000001030307)
Dec 2 04:17:53 localhost systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 77.5 (258 of 333 items), suggesting rotation.
Dec 2 04:17:53 localhost systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 2 04:17:53 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 2 04:17:53 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 2 04:17:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26091 DF PROTO=TCP SPT=44254 DPT=9101 SEQ=638889751 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53800E20000000001030307)
Dec 2 04:17:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44303 DF PROTO=TCP SPT=41902 DPT=9105 SEQ=642120871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53803220000000001030307)
Dec 2 04:17:59 localhost podman[128092]: 2025-12-02 09:17:53.188212481 +0000 UTC m=+0.052800728 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 2 04:18:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31697 DF PROTO=TCP SPT=34768 DPT=9102 SEQ=2977458884 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53811220000000001030307)
Dec 2 04:18:02 localhost python3.9[128293]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 2 04:18:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7702 DF PROTO=TCP SPT=60074 DPT=9882 SEQ=1240822841 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5381E220000000001030307)
Dec 2 04:18:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7703 DF PROTO=TCP SPT=60074 DPT=9882 SEQ=1240822841 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53826230000000001030307)
Dec 2 04:18:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19422 DF PROTO=TCP SPT=45502 DPT=9100 SEQ=1113676127 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53833220000000001030307)
Dec 2 04:18:10 localhost podman[128306]: 2025-12-02 09:18:02.838128654 +0000 UTC m=+0.043448772 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 2 04:18:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18411 DF PROTO=TCP SPT=44942 DPT=9105 SEQ=67867514 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5383F630000000001030307)
Dec 2 04:18:13 localhost python3.9[128505]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 2 04:18:14 localhost podman[128518]: 2025-12-02 09:18:13.354654668 +0000 UTC m=+0.042979387 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Dec 2 04:18:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54606 DF PROTO=TCP SPT=56516 DPT=9102 SEQ=1703738146 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5384A8E0000000001030307)
Dec 2 04:18:16 localhost python3.9[128684]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 2 04:18:18 localhost podman[128696]: 2025-12-02 09:18:16.437990565 +0000 UTC m=+0.040791851 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 04:18:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54608 DF PROTO=TCP SPT=56516 DPT=9102 SEQ=1703738146 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53856A20000000001030307)
Dec 2 04:18:19 localhost python3.9[128863]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 2 04:18:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26093 DF PROTO=TCP SPT=44254 DPT=9101 SEQ=638889751 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53861220000000001030307)
Dec 2 04:18:23 localhost podman[128875]: 2025-12-02 09:18:19.299358651 +0000 UTC m=+0.042434991 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Dec 2 04:18:24 localhost python3.9[129053]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 2 04:18:26 localhost podman[129067]: 2025-12-02 09:18:24.195961922 +0000 UTC m=+0.051867420 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c
Dec 2 04:18:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63788 DF PROTO=TCP SPT=60670 DPT=9101 SEQ=959441373 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53876220000000001030307)
Dec 2 04:18:27 localhost systemd[1]: session-39.scope: Deactivated successfully.
Dec 2 04:18:27 localhost systemd[1]: session-39.scope: Consumed 1min 28.632s CPU time.
Dec 2 04:18:27 localhost systemd-logind[760]: Session 39 logged out. Waiting for processes to exit.
Dec 2 04:18:27 localhost systemd-logind[760]: Removed session 39.
Dec 2 04:18:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54610 DF PROTO=TCP SPT=56516 DPT=9102 SEQ=1703738146 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53887220000000001030307)
Dec 2 04:18:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43813 DF PROTO=TCP SPT=55314 DPT=9882 SEQ=2744117416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5388F610000000001030307)
Dec 2 04:18:33 localhost sshd[129206]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:18:33 localhost systemd-logind[760]: New session 40 of user zuul.
Dec 2 04:18:33 localhost systemd[1]: Started Session 40 of User zuul.
Dec 2 04:18:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43814 DF PROTO=TCP SPT=55314 DPT=9882 SEQ=2744117416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53893620000000001030307)
Dec 2 04:18:34 localhost python3.9[129299]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:18:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43815 DF PROTO=TCP SPT=55314 DPT=9882 SEQ=2744117416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5389B620000000001030307)
Dec 2 04:18:37 localhost python3.9[129497]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 2 04:18:39 localhost python3.9[129612]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 2 04:18:40 localhost python3.9[129666]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 2 04:18:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30891 DF PROTO=TCP SPT=48796 DPT=9100 SEQ=711203781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD538A9220000000001030307)
Dec 2 04:18:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61966 DF PROTO=TCP SPT=55490 DPT=9105 SEQ=2702850273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD538B4620000000001030307)
Dec 2 04:18:45 localhost python3.9[130046]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:18:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45430 DF PROTO=TCP SPT=42016 DPT=9102 SEQ=432651410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD538BFBE0000000001030307)
Dec 2 04:18:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43817 DF PROTO=TCP SPT=55314 DPT=9882 SEQ=2744117416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD538CB220000000001030307)
Dec 2 04:18:49 localhost python3.9[130187]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 2 04:18:51 localhost python3.9[130280]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:18:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63790 DF PROTO=TCP SPT=60670 DPT=9101 SEQ=959441373 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD538D7220000000001030307)
Dec 2 04:18:52 localhost python3.9[130372]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 2 04:18:54 localhost kernel: SELinux: Converting 2743 SID table entries...
Dec 2 04:18:54 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 04:18:54 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 04:18:54 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 04:18:54 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 04:18:54 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 04:18:54 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 04:18:54 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 04:18:55 localhost python3.9[130501]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:18:56 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=18 res=1
Dec 2 04:18:56 localhost python3.9[130599]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:18:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23292 DF PROTO=TCP SPT=59818 DPT=9101 SEQ=1199525092 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD538EB620000000001030307)
Dec 2 04:19:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45434 DF PROTO=TCP SPT=42016 DPT=9102 SEQ=432651410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD538FB220000000001030307)
Dec 2 04:19:02 localhost python3.9[130693]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:19:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7094 DF PROTO=TCP SPT=40110 DPT=9882 SEQ=1422231829 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53904900000000001030307)
Dec 2 04:19:03 localhost python3.9[130938]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 2 04:19:04 localhost python3.9[131028]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:19:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7095 DF PROTO=TCP SPT=40110 DPT=9882 SEQ=1422231829 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53908A20000000001030307)
Dec 2 04:19:05 localhost python3.9[131122]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:19:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7096 DF PROTO=TCP SPT=40110 DPT=9882 SEQ=1422231829 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53910A20000000001030307)
Dec 2 04:19:09 localhost python3.9[131216]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:19:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21631 DF PROTO=TCP SPT=50206 DPT=9100 SEQ=2930293193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5391F220000000001030307)
Dec 2 04:19:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23692 DF PROTO=TCP SPT=45836 DPT=9105 SEQ=3907984531 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53929A20000000001030307)
Dec 2 04:19:13 localhost python3.9[131310]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 2 04:19:13 localhost systemd[1]: Reloading.
Dec 2 04:19:13 localhost systemd-rc-local-generator[131342]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:19:13 localhost systemd-sysv-generator[131345]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:19:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:19:14 localhost python3.9[131442]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:19:15 localhost python3.9[131534]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41448 DF PROTO=TCP SPT=48142 DPT=9102 SEQ=4222163239 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53934EE0000000001030307)
Dec 2 04:19:16 localhost python3.9[131628]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:17 localhost python3.9[131720]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:17 localhost python3.9[131812]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:19:18 localhost python3.9[131885]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667157.4495926-565-21763257614740/.source _original_basename=.78yhok8x follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41450 DF PROTO=TCP SPT=48142 DPT=9102 SEQ=4222163239 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53940E20000000001030307)
Dec 2 04:19:19 localhost python3.9[131977]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:19 localhost python3.9[132069]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 2 04:19:20 localhost python3.9[132161]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:21 localhost python3.9[132253]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:19:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23294 DF PROTO=TCP SPT=59818 DPT=9101 SEQ=1199525092 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5394B220000000001030307)
Dec 2 04:19:22 localhost python3.9[132326]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667161.0989594-691-238459937524891/.source.yaml _original_basename=.563189n1 follow=False checksum=4c28d1662755c608a6ffaa942e27a2488c0a78a3 force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:23 localhost python3.9[132418]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 2 04:19:24 localhost sshd[132433]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:19:24 localhost sshd[132447]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:19:25 localhost ansible-async_wrapper.py[132525]: Invoked with j890941955757 300 /home/zuul/.ansible/tmp/ansible-tmp-1764667164.8049212-763-49185667276614/AnsiballZ_edpm_os_net_config.py _
Dec 2 04:19:25 localhost ansible-async_wrapper.py[132528]: Starting module and watcher
Dec 2 04:19:25 localhost ansible-async_wrapper.py[132528]: Start watching 132529 (300)
Dec 2 04:19:25 localhost ansible-async_wrapper.py[132529]: Start module (132529)
Dec 2 04:19:25 localhost ansible-async_wrapper.py[132525]: Return async_wrapper task started.
Dec 2 04:19:25 localhost python3.9[132530]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False
Dec 2 04:19:26 localhost ansible-async_wrapper.py[132529]: Module complete (132529)
Dec 2 04:19:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58160 DF PROTO=TCP SPT=45708 DPT=9101 SEQ=1738078137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53960620000000001030307)
Dec 2 04:19:29 localhost python3.9[132622]: ansible-ansible.legacy.async_status Invoked with jid=j890941955757.132525 mode=status _async_dir=/root/.ansible_async
Dec 2 04:19:30 localhost python3.9[132681]: ansible-ansible.legacy.async_status Invoked with jid=j890941955757.132525 mode=cleanup _async_dir=/root/.ansible_async
Dec 2 04:19:30 localhost ansible-async_wrapper.py[132528]: Done in kid B.
Dec 2 04:19:30 localhost python3.9[132773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:19:31 localhost python3.9[132846]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667170.204541-829-68851442392888/.source.returncode _original_basename=.i6lpc53s follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41452 DF PROTO=TCP SPT=48142 DPT=9102 SEQ=4222163239 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53971230000000001030307)
Dec 2 04:19:31 localhost python3.9[132938]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:19:33 localhost python3.9[133011]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667171.4468324-877-155512735449835/.source.cfg _original_basename=.k33vu6i6 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59823 DF PROTO=TCP SPT=39654 DPT=9882 SEQ=2844155250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53979C10000000001030307)
Dec 2 04:19:33 localhost python3.9[133103]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 04:19:33 localhost systemd[1]: Reloading Network Manager...
Dec 2 04:19:33 localhost NetworkManager[5967]: [1764667173.8114] audit: op="reload" arg="0" pid=133107 uid=0 result="success"
Dec 2 04:19:33 localhost NetworkManager[5967]: [1764667173.8121] config: signal: SIGHUP (no changes from disk)
Dec 2 04:19:33 localhost systemd[1]: Reloaded Network Manager.
Dec 2 04:19:34 localhost systemd[1]: session-40.scope: Deactivated successfully.
Dec 2 04:19:34 localhost systemd[1]: session-40.scope: Consumed 35.416s CPU time.
Dec 2 04:19:34 localhost systemd-logind[760]: Session 40 logged out. Waiting for processes to exit.
Dec 2 04:19:34 localhost systemd-logind[760]: Removed session 40.
Dec 2 04:19:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59824 DF PROTO=TCP SPT=39654 DPT=9882 SEQ=2844155250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5397DE20000000001030307) Dec 2 04:19:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59825 DF PROTO=TCP SPT=39654 DPT=9882 SEQ=2844155250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53985E20000000001030307) Dec 2 04:19:39 localhost sshd[133122]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:19:39 localhost systemd-logind[760]: New session 41 of user zuul. Dec 2 04:19:39 localhost systemd[1]: Started Session 41 of User zuul. Dec 2 04:19:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61272 DF PROTO=TCP SPT=37962 DPT=9100 SEQ=1499661919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53993220000000001030307) Dec 2 04:19:40 localhost python3.9[133215]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:19:41 localhost python3.9[133309]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:19:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21632 DF PROTO=TCP SPT=50206 DPT=9100 SEQ=2930293193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5399D220000000001030307) Dec 2 04:19:44 localhost 
python3.9[133454]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:19:45 localhost systemd[1]: session-41.scope: Deactivated successfully. Dec 2 04:19:45 localhost systemd[1]: session-41.scope: Consumed 2.100s CPU time. Dec 2 04:19:45 localhost systemd-logind[760]: Session 41 logged out. Waiting for processes to exit. Dec 2 04:19:45 localhost systemd-logind[760]: Removed session 41. Dec 2 04:19:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38640 DF PROTO=TCP SPT=33550 DPT=9102 SEQ=1544027499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539AA1E0000000001030307) Dec 2 04:19:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59827 DF PROTO=TCP SPT=39654 DPT=9882 SEQ=2844155250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539B5220000000001030307) Dec 2 04:19:50 localhost sshd[133582]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:19:51 localhost sshd[133597]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:19:51 localhost systemd-logind[760]: New session 42 of user zuul. Dec 2 04:19:51 localhost systemd[1]: Started Session 42 of User zuul. 
Dec 2 04:19:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58162 DF PROTO=TCP SPT=45708 DPT=9101 SEQ=1738078137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539C1220000000001030307) Dec 2 04:19:52 localhost python3.9[133692]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:19:53 localhost python3.9[133786]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:19:54 localhost python3.9[133882]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:19:55 localhost python3.9[133936]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:19:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30392 DF PROTO=TCP SPT=33936 DPT=9101 SEQ=2207220434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539D5A20000000001030307) Dec 2 04:19:59 localhost python3.9[134030]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] 
gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:20:01 localhost python3.9[134177]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:20:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38644 DF PROTO=TCP SPT=33550 DPT=9102 SEQ=1544027499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539E7220000000001030307) Dec 2 04:20:02 localhost python3.9[134269]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:20:03 localhost python3.9[134373]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:20:03 localhost python3.9[134421]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:20:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 
MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41966 DF PROTO=TCP SPT=43494 DPT=9882 SEQ=3585536734 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539EEF00000000001030307) Dec 2 04:20:04 localhost python3.9[134513]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:20:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41967 DF PROTO=TCP SPT=43494 DPT=9882 SEQ=3585536734 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539F2E20000000001030307) Dec 2 04:20:05 localhost python3.9[134561]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:20:06 localhost python3.9[134653]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 2 04:20:06 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41968 DF PROTO=TCP SPT=43494 DPT=9882 SEQ=3585536734 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539FAE30000000001030307) Dec 2 04:20:06 localhost python3.9[134745]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 2 04:20:07 localhost auditd[726]: Audit daemon rotating log files Dec 2 04:20:07 localhost python3.9[134837]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 2 04:20:08 localhost python3.9[134929]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 2 04:20:09 localhost python3.9[135021]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False 
autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:20:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55657 DF PROTO=TCP SPT=60176 DPT=9100 SEQ=184674842 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A09220000000001030307) Dec 2 04:20:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28752 DF PROTO=TCP SPT=52848 DPT=9105 SEQ=2716686921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A14220000000001030307) Dec 2 04:20:14 localhost python3.9[135115]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:20:15 localhost python3.9[135209]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:20:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23697 DF PROTO=TCP SPT=45836 DPT=9105 SEQ=3907984531 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A1F220000000001030307) Dec 2 04:20:16 localhost python3.9[135301]: ansible-stat Invoked with 
path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:20:17 localhost python3.9[135393]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:20:17 localhost python3.9[135486]: ansible-service_facts Invoked Dec 2 04:20:18 localhost network[135503]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:20:18 localhost network[135504]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:20:18 localhost network[135505]: It is advised to switch to 'NetworkManager' instead for network management. Dec 2 04:20:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41970 DF PROTO=TCP SPT=43494 DPT=9882 SEQ=3585536734 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A2B220000000001030307) Dec 2 04:20:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:20:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48355 DF PROTO=TCP SPT=46226 DPT=9102 SEQ=3232583679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A3B220000000001030307) Dec 2 04:20:25 localhost python3.9[135827]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:20:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38516 DF PROTO=TCP SPT=48996 DPT=9101 SEQ=1971808841 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A4AE20000000001030307) Dec 2 04:20:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28755 DF PROTO=TCP SPT=52848 DPT=9105 SEQ=2716686921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A4D220000000001030307) Dec 2 04:20:30 localhost python3.9[135921]: ansible-package_facts Invoked with manager=['auto'] strategy=first Dec 2 04:20:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48356 DF PROTO=TCP SPT=46226 DPT=9102 SEQ=3232583679 ACK=0 WINDOW=32640 RES=0x00 SYN 
URGP=0 OPT (020405500402080AD53A5B220000000001030307) Dec 2 04:20:32 localhost python3.9[136013]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:20:33 localhost python3.9[136088]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667231.9347064-659-129508954755631/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:20:34 localhost python3.9[136182]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:20:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42979 DF PROTO=TCP SPT=38882 DPT=9882 SEQ=748661140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A68220000000001030307) Dec 2 04:20:35 localhost python3.9[136257]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667233.4481237-704-223874985687674/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:20:36 localhost python3.9[136351]: 
ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:20:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42980 DF PROTO=TCP SPT=38882 DPT=9882 SEQ=748661140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A70220000000001030307) Dec 2 04:20:38 localhost python3.9[136445]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:20:39 localhost python3.9[136499]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:20:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9335 DF PROTO=TCP SPT=57912 DPT=9100 SEQ=2118578433 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A7D230000000001030307) Dec 2 04:20:41 localhost python3.9[136593]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:20:41 localhost python3.9[136647]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:20:41 localhost systemd[1]: Stopping NTP client/server... 
Dec 2 04:20:41 localhost chronyd[26062]: chronyd exiting Dec 2 04:20:41 localhost systemd[1]: chronyd.service: Deactivated successfully. Dec 2 04:20:41 localhost systemd[1]: Stopped NTP client/server. Dec 2 04:20:41 localhost systemd[1]: Starting NTP client/server... Dec 2 04:20:42 localhost chronyd[136655]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Dec 2 04:20:42 localhost chronyd[136655]: Frequency -30.246 +/- 0.280 ppm read from /var/lib/chrony/drift Dec 2 04:20:42 localhost chronyd[136655]: Loaded seccomp filter (level 2) Dec 2 04:20:42 localhost systemd[1]: Started NTP client/server. Dec 2 04:20:42 localhost systemd-logind[760]: Session 42 logged out. Waiting for processes to exit. Dec 2 04:20:42 localhost systemd[1]: session-42.scope: Deactivated successfully. Dec 2 04:20:42 localhost systemd[1]: session-42.scope: Consumed 28.933s CPU time. Dec 2 04:20:42 localhost systemd-logind[760]: Removed session 42. Dec 2 04:20:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46652 DF PROTO=TCP SPT=44054 DPT=9105 SEQ=3194187341 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A89230000000001030307) Dec 2 04:20:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36120 DF PROTO=TCP SPT=54018 DPT=9102 SEQ=140166374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A947E0000000001030307) Dec 2 04:20:47 localhost sshd[136671]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:20:48 localhost systemd-logind[760]: New session 43 of user zuul. Dec 2 04:20:48 localhost systemd[1]: Started Session 43 of User zuul. 
Dec 2 04:20:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36122 DF PROTO=TCP SPT=54018 DPT=9102 SEQ=140166374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53AA0A30000000001030307) Dec 2 04:20:49 localhost python3.9[136764]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:20:50 localhost python3.9[136860]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:20:51 localhost python3.9[136965]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:20:51 localhost python3.9[137043]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.kigequg_ recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:20:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38518 DF PROTO=TCP SPT=48996 DPT=9101 SEQ=1971808841 ACK=0 WINDOW=32640 RES=0x00 SYN 
URGP=0 OPT (020405500402080AD53AAB220000000001030307) Dec 2 04:20:52 localhost python3.9[137168]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:20:53 localhost python3.9[137258]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667252.036441-145-144920253995012/.source _original_basename=.mryekves follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:20:53 localhost python3.9[137350]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:20:55 localhost python3.9[137442]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:20:55 localhost python3.9[137515]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667254.6229198-217-242864558704562/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False 
unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 04:20:56 localhost python3.9[137607]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:20:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=442 DF PROTO=TCP SPT=41590 DPT=9101 SEQ=101522747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53AC0220000000001030307) Dec 2 04:20:57 localhost python3.9[137680]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667256.4510932-217-207855007429213/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 04:20:58 localhost python3.9[137772]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:20:58 localhost python3.9[137864]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:20:59 localhost python3.9[137937]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667258.2181497-329-70111803024865/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:20:59 localhost python3.9[138029]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:21:00 localhost python3.9[138102]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667259.342999-374-145756951399975/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:01 localhost python3.9[138194]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:21:01 localhost systemd[1]: Reloading. 
Dec 2 04:21:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36124 DF PROTO=TCP SPT=54018 DPT=9102 SEQ=140166374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53AD1220000000001030307) Dec 2 04:21:01 localhost systemd-rc-local-generator[138218]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:21:01 localhost systemd-sysv-generator[138222]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:21:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:21:01 localhost systemd[1]: Reloading. Dec 2 04:21:01 localhost systemd-rc-local-generator[138260]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:21:01 localhost systemd-sysv-generator[138263]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:21:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:21:02 localhost systemd[1]: Starting EDPM Container Shutdown... Dec 2 04:21:02 localhost systemd[1]: Finished EDPM Container Shutdown. 
Dec 2 04:21:02 localhost python3.9[138363]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:21:03 localhost python3.9[138436]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667262.297252-443-13874881280350/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2566 DF PROTO=TCP SPT=50354 DPT=9882 SEQ=920202697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53AD9500000000001030307) Dec 2 04:21:03 localhost python3.9[138528]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:21:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2567 DF PROTO=TCP SPT=50354 DPT=9882 SEQ=920202697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53ADD620000000001030307) Dec 2 04:21:05 localhost python3.9[138601]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764667263.4632423-487-60601841819601/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:06 localhost python3.9[138693]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:21:06 localhost systemd[1]: Reloading. Dec 2 04:21:06 localhost systemd-rc-local-generator[138719]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:21:06 localhost systemd-sysv-generator[138725]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:21:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:21:06 localhost systemd[1]: Starting Create netns directory... Dec 2 04:21:06 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 04:21:06 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 04:21:06 localhost systemd[1]: Finished Create netns directory. 
Dec 2 04:21:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2568 DF PROTO=TCP SPT=50354 DPT=9882 SEQ=920202697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53AE5630000000001030307) Dec 2 04:21:08 localhost python3.9[138827]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:21:08 localhost network[138844]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:21:08 localhost network[138845]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:21:08 localhost network[138846]: It is advised to switch to 'NetworkManager' instead for network management. Dec 2 04:21:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39302 DF PROTO=TCP SPT=40848 DPT=9100 SEQ=2009350041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53AF3230000000001030307) Dec 2 04:21:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:21:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18553 DF PROTO=TCP SPT=45392 DPT=9105 SEQ=2193856773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53AFE630000000001030307) Dec 2 04:21:13 localhost python3.9[139048]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:21:14 localhost python3.9[139123]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667273.5136456-611-84129631988213/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:15 localhost python3.9[139216]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:21:15 localhost systemd[1]: Reloading OpenSSH server daemon... Dec 2 04:21:15 localhost systemd[1]: Reloaded OpenSSH server daemon. 
Dec 2 04:21:15 localhost sshd[119204]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:21:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42345 DF PROTO=TCP SPT=52450 DPT=9102 SEQ=3839931986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B09AF0000000001030307) Dec 2 04:21:17 localhost python3.9[139312]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:18 localhost python3.9[139404]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:21:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2570 DF PROTO=TCP SPT=50354 DPT=9882 SEQ=920202697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B15220000000001030307) Dec 2 04:21:19 localhost python3.9[139477]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667278.148781-704-85892581619172/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None 
selevel=None setype=None attributes=None Dec 2 04:21:20 localhost python3.9[139569]: ansible-community.general.timezone Invoked with name=UTC hwclock=None Dec 2 04:21:20 localhost systemd[1]: Starting Time & Date Service... Dec 2 04:21:20 localhost systemd[1]: Started Time & Date Service. Dec 2 04:21:21 localhost python3.9[139665]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:21 localhost python3.9[139757]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:21:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=444 DF PROTO=TCP SPT=41590 DPT=9101 SEQ=101522747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B21220000000001030307) Dec 2 04:21:22 localhost python3.9[139830]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667281.1986651-809-88046430470728/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:22 localhost python3.9[139922]: ansible-ansible.legacy.stat 
Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:21:23 localhost python3.9[139995]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667282.3685393-854-118311154061231/.source.yaml _original_basename=.cbtobiq8 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:24 localhost python3.9[140087]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:21:24 localhost python3.9[140162]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667283.5438411-899-234846099999234/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:25 localhost python3.9[140254]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:21:26 localhost python3.9[140347]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset 
_uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:21:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27852 DF PROTO=TCP SPT=49166 DPT=9101 SEQ=2935783225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B35220000000001030307) Dec 2 04:21:27 localhost python3[140440]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Dec 2 04:21:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18556 DF PROTO=TCP SPT=45392 DPT=9105 SEQ=2193856773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B37220000000001030307) Dec 2 04:21:28 localhost python3.9[140532]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:21:28 localhost python3.9[140605]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667287.6533875-1016-184812338118586/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:29 localhost python3.9[140697]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
get_selinux_context=False Dec 2 04:21:30 localhost python3.9[140770]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667289.0495946-1061-227284860251009/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:30 localhost python3.9[140862]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:21:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42349 DF PROTO=TCP SPT=52450 DPT=9102 SEQ=3839931986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B45220000000001030307) Dec 2 04:21:31 localhost python3.9[140936]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667290.3169687-1106-18509187840266/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:31 localhost python3.9[141028]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:21:32 localhost 
python3.9[141101]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667291.5028868-1152-170243589156562/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:33 localhost python3.9[141193]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:21:33 localhost python3.9[141266]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667292.6886299-1196-270305456228576/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:34 localhost python3.9[141358]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:35 localhost python3.9[141450]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft 
/etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:21:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2571 DF PROTO=TCP SPT=50354 DPT=9882 SEQ=920202697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B55220000000001030307) Dec 2 04:21:36 localhost python3.9[141545]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:36 localhost python3.9[141638]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:37 localhost python3.9[141730]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42984 DF PROTO=TCP SPT=38882 DPT=9882 SEQ=748661140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B5F220000000001030307) Dec 2 04:21:38 localhost python3.9[141822]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Dec 2 04:21:39 localhost python3.9[141915]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Dec 2 04:21:40 localhost systemd-logind[760]: Session 43 logged out. Waiting for processes to exit. Dec 2 04:21:40 localhost systemd[1]: session-43.scope: Deactivated successfully. Dec 2 04:21:40 localhost systemd[1]: session-43.scope: Consumed 28.259s CPU time. Dec 2 04:21:40 localhost systemd-logind[760]: Removed session 43. 
Dec 2 04:21:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20774 DF PROTO=TCP SPT=53124 DPT=9100 SEQ=695088761 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B69230000000001030307) Dec 2 04:21:45 localhost sshd[141931]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:21:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7468 DF PROTO=TCP SPT=54218 DPT=9102 SEQ=4218921566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B7EDE0000000001030307) Dec 2 04:21:46 localhost systemd-logind[760]: New session 44 of user zuul. Dec 2 04:21:46 localhost systemd[1]: Started Session 44 of User zuul. Dec 2 04:21:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46657 DF PROTO=TCP SPT=44054 DPT=9105 SEQ=3194187341 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B7F220000000001030307) Dec 2 04:21:46 localhost python3.9[142026]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. 
suffix= path=None Dec 2 04:21:48 localhost python3.9[142118]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:21:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56992 DF PROTO=TCP SPT=44614 DPT=9101 SEQ=3857717863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B8EA10000000001030307) Dec 2 04:21:50 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Dec 2 04:21:50 localhost python3.9[142214]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts Dec 2 04:21:52 localhost python3.9[142306]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.2rgthf84 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:21:52 localhost python3.9[142381]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.2rgthf84 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667311.5976827-191-157810556419856/.source.2rgthf84 _original_basename=._7njjbyl follow=False checksum=9674ae9a797ab88dd38896b99c4666372998fea7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:53 localhost systemd[1]: tmp-crun.rALYLK.mount: Deactivated successfully. 
Dec 2 04:21:53 localhost podman[142499]: 2025-12-02 09:21:53.492390835 +0000 UTC m=+0.087003534 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, version=7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main) Dec 2 04:21:53 localhost podman[142499]: 2025-12-02 09:21:53.604310679 +0000 UTC m=+0.198923478 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, architecture=x86_64, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 
on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, release=1763362218, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-type=git) Dec 2 04:21:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=445 DF PROTO=TCP SPT=41590 DPT=9101 SEQ=101522747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B9F220000000001030307) Dec 2 04:21:54 localhost python3.9[142699]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:21:56 localhost python3.9[142813]: ansible-ansible.builtin.blockinfile Invoked with block=np0005541914.localdomain,192.168.122.108,np0005541914* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCHh7115UF/t7QzqWY1fk2wHPOuHuMPRhaYTC/yfMWr+nqJ5/TNZTuFxq0aW/1gHanB2usmC0wpWf4c1KsPZ71Ehs/j5nV1wfGtNVEq5Zj7uhs0ea/SQToF2RS406RoIzJW6ogv4Kl3nxGEK6c44WCu8+Ki98dCQ4wesh5kSBkqgiSq2IZkL2gjoAKeXdracGRJ596gTB0yfsMl/qdJDneVHMq/rptlFhabLeiEN+7C0o0gsZwYsxCd2oSB+DD9KfXhWIBeXRr1B7mFcMZpGNG7pG0d1IjYOUmqjvVpECHrLvjiitS3800ZEFwygU4sbM/DWHelobjtJB/fxxPTtGNlbH4MK/OGFh2mm5jB1LMqWSsifA/ZAHASAAffWDwKtF+xJ06OHRDT6gjzOd7VJpc8kR9Jn9pT7UnjypnrM12GtrO0CH8Lf3rin71kf9iZRIphqWXhiLN3G/mdJC2XPIxJp7NQ1Mqc5IhHciCv80bvsGrzLCtAr16/b+cPYo7vIGU=#012np0005541914.localdomain,192.168.122.108,np0005541914* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGGWCSLJV2aPwMTfOaIZ+xjv1QFJPyldmo6H+V71SAll#012np0005541914.localdomain,192.168.122.108,np0005541914* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFoWDrioobP7nWM6onZB+AZBuk/AQQ7zXxT58XHHnNVCXAZxKDdYUpn8CqfQBodfVNr1sWDyzBr0D5lMGYZypzo=#012np0005541909.localdomain,192.168.122.103,np0005541909* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0b4xecJ9cZa0s7FCPYSs6kLrfHyBh8YL/KS+tj3DrfUU03KCcmbHQesHBBcRxB6PDYjueAsvx5rGXzjMojO5Jz2DlZoSPaBM9tm/HAKWhaiL+seTfrRsNLFvxfWyxU/x0FUSOTf01ZThrT/IJ5WkfJD4UgZQSzUPucffImwFt4y2oERfa96sAwSwE4o5RuLzRdKuWB3npxcApj2/3+pyWR59yubokMiU506MI37Hbg8xCaC5qn4ISKB8WBJObICoNQoatrbcqSOrrUEFv/vcWANDYUEw6XzTTwkuIu6dJPJiJh8j5TzDnnvKSK+f3eEG7OCiz814F+o82tDo7U6k5ERO0xmElXdOlPYsiuM5+CTQmmm6xmFN2L3HIvZlyPn3oF26oV+INAd3XsF5MIFcfpGUXH5b04gE7LhpdVLVfLGGYSVWjZhzxl/Wa0OiHoMaDUYoN2bPG0h5SPUDIyDv2jW3FDxhOWANR/9ITUCQpz3gSwl/1AVN3HCWf+RUeLuE=#012np0005541909.localdomain,192.168.122.103,np0005541909* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA7RcuDge6wF/g+qZxY6m8WG6IEuMAvvdJQnnCjLs+Z1#012np0005541909.localdomain,192.168.122.103,np0005541909* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP5sNXub2DBEGdchrrXonnWitouBamsCHQlfu1Eq48/u/VA5EJmoCHsMI/KSOMxMnSS+uUeGceHpl9AyeHtY2NU=#012np0005541910.localdomain,192.168.122.104,np0005541910* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDOmh2HMG9Y5+9VA8Ap3pHIOQhG/GfAsIqnmfJJuGwKb8N2T9r1Yd+kmoP7Xs41cto4h6Fw1f4Pa6Tw050y3LmwpXvDN+2Qq1qYI0rT4pqOiYBkyMbOQhqLF5tA+MNYGdibQj/fWkG+gKa8wwzkTgCEAn6PgEZiqR9LFJrqr4RfQDxaWCLmXM96+AVGG5/SXWx5u6T3lanUnpcfISvB2yx4HifsINAHPgLR4weEzra/b7e0QNyxItxvlDseasPyeYHD3Hdi2PNuUmoZC+zWEoWoU3BMAQeXR7lmEcdtyK5wr0pIBmf0CKFdvGrdVWrzAUbDc8ZHXmWyKlWHHZvHch1V2r/S4J2983UsG3sJwM8954Tj325LgS1nldIYBSjwMGfhZFYzmy9obAN7ZSV5qwD0h+rxt/I9RNdXS3SRu9tOZI+AN59De44cF23OJS5MfrfnB7JUnBOv4ScVML4rPjPx9L4/omOlfbBVJx42b1RlboXEk52J7Aa3xRseA4Elvuk=#012np0005541910.localdomain,192.168.122.104,np0005541910* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIx+QMGsIWmPvyCeFcRzy+Z3KrW6oIHjAujq2mTiluKE#012np0005541910.localdomain,192.168.122.104,np0005541910* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPiujdvwsNBrUjQMVBj6TBCEcpbfZIgHcCBzjuRUWPac2ltR7NNO2aF0KEDTH4F4qoWK7fw0fn0UFKuTrY4INV8=#012np0005541913.localdomain,192.168.122.107,np0005541913* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDYXeXWwxJkeR9i2V9hYiVGqEGSbkwFIKUbTm3m8em9m5o380jUORSYXOITLm0CAl/waSYEc4fiPu2sAYDISig1zqAItfAODEdayFoKK63ui7vq92ZPKayhmjahj2jNo3KMAZ5aFzNBcowsRooRqLNJ7R9BAQ4H8kdqL9xdRjy5bvfWJHGrm8PvWcUaRYebCQ35j+7nHq4RFRYsd964NKjrq+FxkjyOSs2AxE+SHYOVgAAd8Jp2uyr3dR56IzWy8WqQzPj6tlsER8+/Kt1lASATcuMFeteA0M7tbjZxEIAPyfktPVQOq9mgeFOFmTf8oTbt94Rk2QmyNI4oE7sQHFWo9UWrvZd9LpDDartUls5uHunn4SzvgvtRimO3e1hNXn0VQLGNfSUwGij0R3iOYJpACHgly3J7sbX3tROvwRpawZlGIGZY46vaYRMXGClXz+lUCa6ZZO+f6BX6bEt0VfYWX8IVmnH2oJXEJBYJPVXZML+OcczJc8zEfHxBylpZn4k=#012np0005541913.localdomain,192.168.122.107,np0005541913* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEGKyrd1x8JIpNEVeXNPog2z4+Z1Gyh32lFLn9uh2H3I#012np0005541913.localdomain,192.168.122.107,np0005541913* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAGOHjEHyYQ71qgLjQWD4LGL0rAKniN6cBK/Yx+b+dGqDveVXKGlkaXQOOfCp4GEX5fDI6bqBjCB02Ool/6wTT8=#012np0005541911.localdomain,192.168.122.105,np0005541911* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCzI5YTDMvj8zBlKqeNplIMBQQJ43gcDfB5cRE7DwwpHBRcqOuhSoIm7r0C3h5ABQJYkTXEGRY0i5HC5eMErD7SKRJJ3q9aZ+uv4VvUGagr7M9S/JGUjZej2+ACXZ7L+d9MLt389xVtIuuNh5Cy3U8muIBEAS1b4mXOJ95eiW3M5b2hxmol0DTjUMX/bLtJU/MQ09wE72pj6Uqz/CCFsUwDBZlQ3jcVK74fYwgItCNkLJ+D2E4wTl4Ei8XOlEY9cV8B1E+aK6iUKesiya0Vfi/Ant77ONQDeCsI21AJDbi5wtUXg4qXBu3Z/zObZiEmedzqWj7K46Nv8lDlQoeoKuxzTCwxgn0PaorQgkUvUdAyk5Qo4BaUOv8ojICiZvRy9QZ3jblr1dCM/Jy3g4Sz6Hz4QHxtV21nUw//sBN2X6jCHQVGTJeZrbVvgGNcGiqcCzQTW/4NoiOB0ho7RVNtD+oYb5UE+Lh+Ibua3bv7zfnLjsw1GiyclsCgrQTKBl8Netc=#012np0005541911.localdomain,192.168.122.105,np0005541911* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILT7VjxC/vKVj4DmZTIjCQwrK+UN5wih4A5ddEFb5wLX#012np0005541911.localdomain,192.168.122.105,np0005541911* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEJ5o8j1+/xDc8zMV2yChXY+U6nf1GT6sS3GGAkd+aR/6mUWuiQzjkFESsidYGPHaqz55q4REeXXQtW6T8mmqzU=#012np0005541912.localdomain,192.168.122.106,np0005541912* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKgyHtHHKWFdaOqx5AsvOJPmNsbjVxvzh05A7Hy02rgbdg4zBUd/E0mqG+tYVGg12fIdbRNgjUfM+PEGJznZdEQnZCtLgMhbpRC33IbCXMw7Ev/tRfkffpP+H8VdyGL83zCFFnMIMD2IDWU+MjTf/ais63Zv/UiBL24pkZ18u3nypjN3uN2FdeDF4JNtnSVK6i1a+wE6wLmdSAfX8ovFbLhZMgAAPU3I3Fu5D/pSa6OjKshEcNy0m6KCKwQoT6cbDGsnMjd2sdE1Vc+KgkrBN3fMmrChdgi2Ig7CpkdGvQF0G/t53cwNatjp78FrNCHjpLcIAFw3QgfepiTiXQbXQ/jC5xkdM+5wIcSmB3rf3GKaUgaxnjk55GAXxrHwAFwOi+ltxSNPszH9vfIBLluThUdmQmvtCOCvEFZ5uuVuu94A5frS9BzOIzz7ylrqau3nHGaPjbT80XubnqZsHlOahsovbk1mu3ewvoitAVb0E+BBroNWeHT9BbA8Igh+sxwGM=#012np0005541912.localdomain,192.168.122.106,np0005541912* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJZZ0KsiMflqlnr0GTYoucjExbwZ18yPSOiSsfRMt90v#012np0005541912.localdomain,192.168.122.106,np0005541912* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGm4CXNWO0ZHMO4eJHc4n6NO7LQlY2+Ctp7F81Y3AEXQl3GIl2c/UCuL0O5ZJj6nEB654FSLAuOOifViFW8rlDc=#012 create=True mode=0644 path=/tmp/ansible.2rgthf84 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False 
marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:58 localhost python3.9[142905]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.2rgthf84' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:22:00 localhost python3.9[142999]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.2rgthf84 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:01 localhost systemd[1]: session-44.scope: Deactivated successfully. Dec 2 04:22:01 localhost systemd[1]: session-44.scope: Consumed 4.185s CPU time. Dec 2 04:22:01 localhost systemd-logind[760]: Session 44 logged out. Waiting for processes to exit. Dec 2 04:22:01 localhost systemd-logind[760]: Removed session 44. Dec 2 04:22:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21489 DF PROTO=TCP SPT=55132 DPT=9882 SEQ=2040504519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53BC3B10000000001030307) Dec 2 04:22:07 localhost sshd[143014]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:22:07 localhost systemd-logind[760]: New session 45 of user zuul. Dec 2 04:22:07 localhost systemd[1]: Started Session 45 of User zuul. 
Dec 2 04:22:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31649 DF PROTO=TCP SPT=46550 DPT=9100 SEQ=1511025377 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53BD6870000000001030307) Dec 2 04:22:08 localhost sshd[143063]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:22:09 localhost python3.9[143109]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:22:10 localhost python3.9[143205]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Dec 2 04:22:11 localhost python3.9[143299]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:22:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23805 DF PROTO=TCP SPT=52734 DPT=9105 SEQ=2099678045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53BE4C70000000001030307) Dec 2 04:22:12 localhost python3.9[143392]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:22:13 localhost python3.9[143485]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:22:13 localhost python3.9[143579]: ansible-ansible.legacy.command 
Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:22:14 localhost python3.9[143674]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:15 localhost systemd[1]: session-45.scope: Deactivated successfully. Dec 2 04:22:15 localhost systemd[1]: session-45.scope: Consumed 3.893s CPU time. Dec 2 04:22:15 localhost systemd-logind[760]: Session 45 logged out. Waiting for processes to exit. Dec 2 04:22:15 localhost systemd-logind[760]: Removed session 45. 
Dec 2 04:22:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2060 DF PROTO=TCP SPT=60334 DPT=9102 SEQ=4163297219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53BF40E0000000001030307) Dec 2 04:22:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2061 DF PROTO=TCP SPT=60334 DPT=9102 SEQ=4163297219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53BF8220000000001030307) Dec 2 04:22:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2062 DF PROTO=TCP SPT=60334 DPT=9102 SEQ=4163297219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C00230000000001030307) Dec 2 04:22:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36972 DF PROTO=TCP SPT=59186 DPT=9101 SEQ=2155726346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C03D00000000001030307) Dec 2 04:22:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36973 DF PROTO=TCP SPT=59186 DPT=9101 SEQ=2155726346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C07E20000000001030307) Dec 2 04:22:21 localhost sshd[143689]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:22:21 localhost systemd-logind[760]: New session 46 of user zuul. Dec 2 04:22:21 localhost systemd[1]: Started Session 46 of User zuul. 
Dec 2 04:22:22 localhost python3.9[143782]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:22:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36974 DF PROTO=TCP SPT=59186 DPT=9101 SEQ=2155726346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C0FE30000000001030307) Dec 2 04:22:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2063 DF PROTO=TCP SPT=60334 DPT=9102 SEQ=4163297219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C0FE30000000001030307) Dec 2 04:22:23 localhost python3.9[143878]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:22:24 localhost python3.9[143932]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:22:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36975 DF PROTO=TCP SPT=59186 DPT=9101 SEQ=2155726346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C1FA20000000001030307) Dec 2 04:22:28 localhost 
python3.9[144024]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:22:30 localhost python3.9[144117]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:31 localhost python3.9[144209]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2064 DF PROTO=TCP SPT=60334 DPT=9102 SEQ=4163297219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C31220000000001030307) Dec 2 04:22:32 localhost python3.9[144301]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated#012Core libraries or services have been updated since boot-up:#012 * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting 
state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:33 localhost python3.9[144391]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 2 04:22:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25881 DF PROTO=TCP SPT=36800 DPT=9882 SEQ=2158992548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C38E10000000001030307) Dec 2 04:22:34 localhost python3.9[144481]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:22:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25882 DF PROTO=TCP SPT=36800 DPT=9882 SEQ=2158992548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C3CE20000000001030307) Dec 2 04:22:34 localhost python3.9[144573]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:22:35 localhost systemd[1]: session-46.scope: Deactivated successfully. Dec 2 04:22:35 localhost systemd[1]: session-46.scope: Consumed 9.210s CPU time. 
Dec 2 04:22:35 localhost systemd-logind[760]: Session 46 logged out. Waiting for processes to exit. Dec 2 04:22:35 localhost systemd-logind[760]: Removed session 46. Dec 2 04:22:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25883 DF PROTO=TCP SPT=36800 DPT=9882 SEQ=2158992548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C44E20000000001030307) Dec 2 04:22:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25884 DF PROTO=TCP SPT=36800 DPT=9882 SEQ=2158992548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C54A20000000001030307) Dec 2 04:22:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53566 DF PROTO=TCP SPT=41358 DPT=9105 SEQ=1471419076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C5DE20000000001030307) Dec 2 04:22:43 localhost sshd[144588]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:22:43 localhost systemd-logind[760]: New session 47 of user zuul. Dec 2 04:22:43 localhost systemd[1]: Started Session 47 of User zuul. 
Dec 2 04:22:44 localhost python3.9[144681]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:22:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44548 DF PROTO=TCP SPT=54310 DPT=9102 SEQ=2725746045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C693E0000000001030307) Dec 2 04:22:46 localhost python3.9[144777]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:22:47 localhost python3.9[144869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:22:48 localhost python3.9[144942]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667366.8549404-186-222855213792009/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:49 localhost python3.9[145034]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root 
path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:22:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25885 DF PROTO=TCP SPT=36800 DPT=9882 SEQ=2158992548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C75230000000001030307) Dec 2 04:22:49 localhost python3.9[145126]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:22:50 localhost python3.9[145199]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667369.1697154-259-139233647535715/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:51 localhost python3.9[145291]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None 
selevel=None attributes=None Dec 2 04:22:51 localhost chronyd[136655]: Selected source 162.159.200.1 (pool.ntp.org) Dec 2 04:22:52 localhost python3.9[145383]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:22:52 localhost python3.9[145456]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667371.4708521-335-200761063864217/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44551 DF PROTO=TCP SPT=54310 DPT=9102 SEQ=2725746045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C85220000000001030307) Dec 2 04:22:53 localhost python3.9[145548]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:22:54 localhost python3.9[145640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True get_selinux_context=False Dec 2 04:22:54 localhost python3.9[145713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667373.923136-411-22533227368450/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:55 localhost python3.9[145805]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:22:56 localhost python3.9[145942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:22:56 localhost python3.9[146032]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667375.8353608-486-20674791776152/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:57 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54398 DF PROTO=TCP SPT=46714 DPT=9101 SEQ=325028808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C94E30000000001030307) Dec 2 04:22:57 localhost python3.9[146139]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:22:58 localhost python3.9[146231]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:22:58 localhost python3.9[146304]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667377.7392528-559-167024099175096/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:59 localhost python3.9[146396]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:23:00 localhost python3.9[146488]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:23:01 localhost python3.9[146561]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667379.6445596-631-4177287125721/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:23:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44552 DF PROTO=TCP SPT=54310 DPT=9102 SEQ=2725746045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CA5230000000001030307) Dec 2 04:23:01 localhost python3.9[146653]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:23:03 localhost python3.9[146745]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
get_selinux_context=False Dec 2 04:23:03 localhost python3.9[146818]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667382.597127-702-113803606209010/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:23:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60192 DF PROTO=TCP SPT=49492 DPT=9882 SEQ=96069256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CAE110000000001030307) Dec 2 04:23:03 localhost systemd-logind[760]: Session 47 logged out. Waiting for processes to exit. Dec 2 04:23:03 localhost systemd[1]: session-47.scope: Deactivated successfully. Dec 2 04:23:03 localhost systemd[1]: session-47.scope: Consumed 12.454s CPU time. Dec 2 04:23:03 localhost systemd-logind[760]: Removed session 47. 
Dec 2 04:23:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60193 DF PROTO=TCP SPT=49492 DPT=9882 SEQ=96069256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CB2230000000001030307) Dec 2 04:23:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60194 DF PROTO=TCP SPT=49492 DPT=9882 SEQ=96069256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CBA220000000001030307) Dec 2 04:23:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13741 DF PROTO=TCP SPT=60270 DPT=9100 SEQ=2993189913 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CC7220000000001030307) Dec 2 04:23:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10988 DF PROTO=TCP SPT=50888 DPT=9105 SEQ=2037304202 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CD3220000000001030307) Dec 2 04:23:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1105 DF PROTO=TCP SPT=59208 DPT=9102 SEQ=3413441333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CDE6E0000000001030307) Dec 2 04:23:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1107 DF PROTO=TCP SPT=59208 DPT=9102 SEQ=3413441333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AD53CEA620000000001030307) Dec 2 04:23:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54400 DF PROTO=TCP SPT=46714 DPT=9101 SEQ=325028808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CF5220000000001030307) Dec 2 04:23:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40391 DF PROTO=TCP SPT=57954 DPT=9101 SEQ=1057897481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D09E20000000001030307) Dec 2 04:23:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1109 DF PROTO=TCP SPT=59208 DPT=9102 SEQ=3413441333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D1B230000000001030307) Dec 2 04:23:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50080 DF PROTO=TCP SPT=59806 DPT=9882 SEQ=629235110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D23410000000001030307) Dec 2 04:23:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50081 DF PROTO=TCP SPT=59806 DPT=9882 SEQ=629235110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D27620000000001030307) Dec 2 04:23:35 localhost sshd[146833]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:23:35 localhost systemd-logind[760]: New session 48 of user zuul. Dec 2 04:23:35 localhost systemd[1]: Started Session 48 of User zuul. 
Dec 2 04:23:36 localhost python3.9[146928]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:23:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50082 DF PROTO=TCP SPT=59806 DPT=9882 SEQ=629235110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D2F620000000001030307)
Dec 2 04:23:37 localhost python3.9[147020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:23:38 localhost python3.9[147093]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667416.847182-64-197282402515176/.source.conf _original_basename=ceph.conf follow=False checksum=bb050c8012c4b6ce73dbd1d555a91a361a703a4d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:23:38 localhost python3.9[147185]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:23:39 localhost python3.9[147258]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667418.173121-64-251895035122086/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=55e6802793866e8195bd7dc6c06395cc4184e741 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:23:39 localhost systemd[1]: session-48.scope: Deactivated successfully.
Dec 2 04:23:39 localhost systemd[1]: session-48.scope: Consumed 2.154s CPU time.
Dec 2 04:23:39 localhost systemd-logind[760]: Session 48 logged out. Waiting for processes to exit.
Dec 2 04:23:39 localhost systemd-logind[760]: Removed session 48.
Dec 2 04:23:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9155 DF PROTO=TCP SPT=56848 DPT=9100 SEQ=3703839231 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D3D230000000001030307)
Dec 2 04:23:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57403 DF PROTO=TCP SPT=56976 DPT=9105 SEQ=617448496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D48620000000001030307)
Dec 2 04:23:45 localhost sshd[147273]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:23:45 localhost systemd-logind[760]: New session 49 of user zuul.
Dec 2 04:23:45 localhost systemd[1]: Started Session 49 of User zuul.
Dec 2 04:23:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53571 DF PROTO=TCP SPT=41358 DPT=9105 SEQ=1471419076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D53220000000001030307)
Dec 2 04:23:46 localhost python3.9[147366]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:23:47 localhost python3.9[147462]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:23:48 localhost python3.9[147554]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:23:48 localhost python3.9[147644]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:23:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50084 DF PROTO=TCP SPT=59806 DPT=9882 SEQ=629235110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D5F220000000001030307)
Dec 2 04:23:50 localhost python3.9[147736]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 2 04:23:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40393 DF PROTO=TCP SPT=57954 DPT=9101 SEQ=1057897481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D6B220000000001030307)
Dec 2 04:23:52 localhost python3.9[147828]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 2 04:23:53 localhost python3.9[147882]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:23:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44036 DF PROTO=TCP SPT=57400 DPT=9101 SEQ=3631023866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D7F230000000001030307)
Dec 2 04:23:57 localhost python3.9[147976]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 2 04:23:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57406 DF PROTO=TCP SPT=56976 DPT=9105 SEQ=617448496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D81220000000001030307)
Dec 2 04:23:58 localhost python3[148134]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012 rule:#012 proto: udp#012 dport: 4789#012- rule_name: 119 neutron geneve networks#012 rule:#012 proto: udp#012 dport: 6081#012 state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: OUTPUT#012 jump: NOTRACK#012 action: append#012 state: []#012- rule_name: 121 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: PREROUTING#012 jump: NOTRACK#012 action: append#012 state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 2 04:23:59 localhost python3.9[148241]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:23:59 localhost python3.9[148333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:00 localhost python3.9[148381]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25563 DF PROTO=TCP SPT=60764 DPT=9102 SEQ=2472583394 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D8F230000000001030307)
Dec 2 04:24:01 localhost python3.9[148473]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:01 localhost python3.9[148521]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.h3scoodm recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:02 localhost python3.9[148613]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:03 localhost python3.9[148661]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:03 localhost python3.9[148753]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:24:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39208 DF PROTO=TCP SPT=42224 DPT=9882 SEQ=141486871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D9C620000000001030307)
Dec 2 04:24:05 localhost python3[148846]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 2 04:24:06 localhost python3.9[148938]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39209 DF PROTO=TCP SPT=42224 DPT=9882 SEQ=141486871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53DA4620000000001030307)
Dec 2 04:24:06 localhost python3.9[149013]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667445.8260493-434-174312310178595/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:07 localhost python3.9[149105]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:08 localhost python3.9[149180]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667447.1363156-479-122724017332538/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:08 localhost python3.9[149272]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:09 localhost python3.9[149347]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667448.3327696-523-261271181287278/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61248 DF PROTO=TCP SPT=58590 DPT=9100 SEQ=3499258261 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53DB3220000000001030307)
Dec 2 04:24:11 localhost python3.9[149439]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:11 localhost python3.9[149514]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667449.9358726-569-55813683967494/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:12 localhost python3.9[149606]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16634 DF PROTO=TCP SPT=51906 DPT=9105 SEQ=863709002 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53DBDA20000000001030307)
Dec 2 04:24:13 localhost python3.9[149681]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667451.8546398-614-227539375211042/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:14 localhost python3.9[149773]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:14 localhost python3.9[149865]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:24:15 localhost python3.9[149960]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1115 DF PROTO=TCP SPT=36154 DPT=9102 SEQ=2809519453 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53DC8CE0000000001030307)
Dec 2 04:24:16 localhost python3.9[150052]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:24:16 localhost python3.9[150145]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:24:17 localhost python3.9[150239]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:24:18 localhost python3.9[150335]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:18 localhost sshd[150418]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:24:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1117 DF PROTO=TCP SPT=36154 DPT=9102 SEQ=2809519453 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53DD4E20000000001030307)
Dec 2 04:24:19 localhost python3.9[150426]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:24:20 localhost python3.9[150520]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=np0005541914.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:80:ac:27:10" external_ids:ovn-encap-ip=172.19.0.108 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:24:20 localhost ovs-vsctl[150521]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=np0005541914.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:80:ac:27:10 external_ids:ovn-encap-ip=172.19.0.108 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 2 04:24:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44038 DF PROTO=TCP SPT=57400 DPT=9101 SEQ=3631023866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53DDF220000000001030307)
Dec 2 04:24:21 localhost python3.9[150613]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:24:22 localhost python3.9[150706]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:24:24 localhost python3.9[150801]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:24:24 localhost python3.9[150893]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:25 localhost python3.9[150941]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:24:25 localhost python3.9[151033]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:26 localhost python3.9[151081]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:24:27 localhost python3.9[151173]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57143 DF PROTO=TCP SPT=35940 DPT=9101 SEQ=3024191987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53DF4620000000001030307)
Dec 2 04:24:27 localhost python3.9[151265]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:28 localhost python3.9[151313]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:28 localhost python3.9[151405]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:29 localhost python3.9[151453]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:30 localhost python3.9[151545]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:24:30 localhost systemd[1]: Reloading.
Dec 2 04:24:30 localhost systemd-sysv-generator[151574]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:24:30 localhost systemd-rc-local-generator[151569]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:24:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1119 DF PROTO=TCP SPT=36154 DPT=9102 SEQ=2809519453 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E05230000000001030307)
Dec 2 04:24:31 localhost python3.9[151674]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:32 localhost python3.9[151722]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:33 localhost python3.9[151814]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63826 DF PROTO=TCP SPT=60950 DPT=9882 SEQ=2058759243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E0DA20000000001030307)
Dec 2 04:24:33 localhost python3.9[151862]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63827 DF PROTO=TCP SPT=60950 DPT=9882 SEQ=2058759243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E11A20000000001030307)
Dec 2 04:24:34 localhost python3.9[151954]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:24:34 localhost systemd[1]: Reloading.
Dec 2 04:24:34 localhost systemd-sysv-generator[151986]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:24:34 localhost systemd-rc-local-generator[151983]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:24:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:24:35 localhost systemd[1]: Starting Create netns directory...
Dec 2 04:24:35 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 2 04:24:35 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 2 04:24:35 localhost systemd[1]: Finished Create netns directory.
Dec 2 04:24:36 localhost python3.9[152091]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:24:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63828 DF PROTO=TCP SPT=60950 DPT=9882 SEQ=2058759243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E19A20000000001030307)
Dec 2 04:24:36 localhost python3.9[152183]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:37 localhost python3.9[152256]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667476.2761338-1345-42670944581187/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:24:38 localhost python3.9[152348]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:24:39 localhost python3.9[152440]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:39 localhost python3.9[152515]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667478.5716455-1420-82931941760283/.source.json _original_basename=._y7o53nv follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32867 DF PROTO=TCP SPT=33294 DPT=9100 SEQ=2871554951 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E27220000000001030307)
Dec 2 04:24:40 localhost python3.9[152607]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:42 localhost python3.9[152864]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 2 04:24:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61249 DF PROTO=TCP SPT=58590 DPT=9100 SEQ=3499258261 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E31230000000001030307)
Dec 2 04:24:43 localhost python3.9[152956]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 2 04:24:44 localhost python3.9[153048]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 2 04:24:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63708 DF PROTO=TCP SPT=49276 DPT=9102 SEQ=4061710079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E3DFE0000000001030307)
Dec 2 04:24:48 localhost python3[153168]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 2 04:24:48 localhost python3[153168]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c",#012 "Digest": "sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:38:47.246477714Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 345722821,#012 "VirtualSize": 345722821,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:ba9362d2aeb297e34b0679b2fc8168350c70a5b0ec414daf293bf2bc013e9088",#012 "sha256:aae3b8a85314314b9db80a043fdf3f3b1d0b69927faca0303c73969a23dddd0f"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:05.672474685Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-l
Dec 2 04:24:48 localhost podman[153220]: 2025-12-02 09:24:48.588756756 +0000 UTC m=+0.095602407 container remove b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, name=rhosp17/openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 2 04:24:48 localhost python3[153168]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller
Dec 2 04:24:48 localhost podman[153234]:
Dec 2 04:24:48 localhost podman[153234]: 2025-12-02 09:24:48.672379934 +0000 UTC m=+0.070011223 container create c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 2 04:24:48 localhost podman[153234]: 2025-12-02 09:24:48.631878845 +0000 UTC m=+0.029510174 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 2 04:24:48 localhost python3[153168]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 2 04:24:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63830 DF PROTO=TCP SPT=60950 DPT=9882 SEQ=2058759243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E49220000000001030307)
Dec 2 04:24:49 localhost python3.9[153362]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:24:50 localhost python3.9[153456]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:50 localhost python3.9[153502]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:24:51 localhost python3.9[153593]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764667490.6350732-1684-171162379760655/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:51 localhost python3.9[153639]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 2 04:24:51 localhost systemd[1]: Reloading.
Dec 2 04:24:51 localhost systemd-rc-local-generator[153664]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:24:51 localhost systemd-sysv-generator[153669]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:24:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:24:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57145 DF PROTO=TCP SPT=35940 DPT=9101 SEQ=3024191987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E55220000000001030307)
Dec 2 04:24:52 localhost python3.9[153721]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:24:52 localhost systemd[1]: Reloading.
Dec 2 04:24:52 localhost systemd-rc-local-generator[153747]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:24:52 localhost systemd-sysv-generator[153753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:24:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:24:53 localhost systemd[1]: Starting ovn_controller container...
Dec 2 04:24:53 localhost systemd[1]: Started libcrun container.
Dec 2 04:24:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4b0753893eeb8df37e9b4452d463ad6765466abfa1ffd2a9924ebcac6d8353/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 2 04:24:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:24:53 localhost podman[153763]: 2025-12-02 09:24:53.260574348 +0000 UTC m=+0.168243520 container init c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 2 04:24:53 localhost ovn_controller[153778]: + sudo -E kolla_set_configs
Dec 2 04:24:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:24:53 localhost podman[153763]: 2025-12-02 09:24:53.305252585 +0000 UTC m=+0.212921707 container start c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:24:53 localhost edpm-start-podman-container[153763]: ovn_controller
Dec 2 04:24:53 localhost systemd[1]: Created slice User Slice of UID 0.
Dec 2 04:24:53 localhost systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 2 04:24:53 localhost systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 2 04:24:53 localhost systemd[1]: Starting User Manager for UID 0...
Dec 2 04:24:53 localhost podman[153786]: 2025-12-02 09:24:53.422270656 +0000 UTC m=+0.113309909 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 2 04:24:53 localhost podman[153786]: 2025-12-02 09:24:53.437697448 +0000 UTC m=+0.128736701 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Dec 2 04:24:53 localhost podman[153786]: unhealthy
Dec 2 04:24:53 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 04:24:53 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Failed with result 'exit-code'.
Dec 2 04:24:53 localhost systemd[153805]: Queued start job for default target Main User Target.
Dec 2 04:24:53 localhost systemd[153805]: Created slice User Application Slice.
Dec 2 04:24:53 localhost systemd[153805]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 2 04:24:53 localhost systemd[153805]: Started Daily Cleanup of User's Temporary Directories.
Dec 2 04:24:53 localhost systemd[153805]: Reached target Paths.
Dec 2 04:24:53 localhost systemd[153805]: Reached target Timers.
Dec 2 04:24:53 localhost systemd[153805]: Starting D-Bus User Message Bus Socket...
Dec 2 04:24:53 localhost systemd[153805]: Starting Create User's Volatile Files and Directories...
Dec 2 04:24:53 localhost edpm-start-podman-container[153762]: Creating additional drop-in dependency for "ovn_controller" (c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf)
Dec 2 04:24:53 localhost systemd[153805]: Finished Create User's Volatile Files and Directories.
Dec 2 04:24:53 localhost systemd[153805]: Listening on D-Bus User Message Bus Socket.
Dec 2 04:24:53 localhost systemd[153805]: Reached target Sockets.
Dec 2 04:24:53 localhost systemd[153805]: Reached target Basic System.
Dec 2 04:24:53 localhost systemd[153805]: Reached target Main User Target.
Dec 2 04:24:53 localhost systemd[153805]: Startup finished in 136ms.
Dec 2 04:24:53 localhost systemd[1]: Started User Manager for UID 0.
Dec 2 04:24:53 localhost systemd[1]: Started Session c11 of User root.
Dec 2 04:24:53 localhost systemd[1]: Reloading.
Dec 2 04:24:53 localhost ovn_controller[153778]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 2 04:24:53 localhost ovn_controller[153778]: INFO:__main__:Validating config file
Dec 2 04:24:53 localhost ovn_controller[153778]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 2 04:24:53 localhost ovn_controller[153778]: INFO:__main__:Writing out command to execute
Dec 2 04:24:53 localhost systemd-sysv-generator[153864]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:24:53 localhost systemd-rc-local-generator[153860]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:24:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:24:53 localhost systemd[1]: Started ovn_controller container.
Dec 2 04:24:53 localhost systemd[1]: session-c11.scope: Deactivated successfully.
Dec 2 04:24:53 localhost ovn_controller[153778]: ++ cat /run_command
Dec 2 04:24:53 localhost ovn_controller[153778]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Dec 2 04:24:53 localhost ovn_controller[153778]: + ARGS=
Dec 2 04:24:53 localhost ovn_controller[153778]: + sudo kolla_copy_cacerts
Dec 2 04:24:53 localhost systemd[1]: Started Session c12 of User root.
Dec 2 04:24:53 localhost systemd[1]: session-c12.scope: Deactivated successfully.
Dec 2 04:24:53 localhost ovn_controller[153778]: + [[ ! -n '' ]]
Dec 2 04:24:53 localhost ovn_controller[153778]: + . kolla_extend_start
Dec 2 04:24:53 localhost ovn_controller[153778]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Dec 2 04:24:53 localhost ovn_controller[153778]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\'''
Dec 2 04:24:53 localhost ovn_controller[153778]: + umask 0022
Dec 2 04:24:53 localhost ovn_controller[153778]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00013|main|INFO|OVS feature set changed, force recompute.
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00017|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00018|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00021|main|INFO|OVS feature set changed, force recompute. Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4 Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Dec 2 04:24:53 localhost ovn_controller[153778]: 2025-12-02T09:24:53Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Dec 2 04:24:54 localhost python3.9[153975]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:24:54 localhost ovs-vsctl[153976]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . 
other_config hw-offload Dec 2 04:24:55 localhost python3.9[154068]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:24:55 localhost ovs-vsctl[154070]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids Dec 2 04:24:57 localhost python3.9[154163]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:24:57 localhost ovs-vsctl[154164]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options Dec 2 04:24:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12318 DF PROTO=TCP SPT=53152 DPT=9101 SEQ=1380458474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E69A20000000001030307) Dec 2 04:24:57 localhost systemd[1]: session-49.scope: Deactivated successfully. Dec 2 04:24:57 localhost systemd[1]: session-49.scope: Consumed 41.334s CPU time. Dec 2 04:24:57 localhost systemd-logind[760]: Session 49 logged out. Waiting for processes to exit. Dec 2 04:24:57 localhost systemd-logind[760]: Removed session 49. 
Dec 2 04:25:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63712 DF PROTO=TCP SPT=49276 DPT=9102 SEQ=4061710079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E7B220000000001030307) Dec 2 04:25:03 localhost sshd[154258]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:25:03 localhost systemd-logind[760]: New session 51 of user zuul. Dec 2 04:25:03 localhost systemd[1]: Started Session 51 of User zuul. Dec 2 04:25:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39635 DF PROTO=TCP SPT=36732 DPT=9882 SEQ=332566116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E82D10000000001030307) Dec 2 04:25:03 localhost systemd[1]: Stopping User Manager for UID 0... Dec 2 04:25:03 localhost systemd[153805]: Activating special unit Exit the Session... Dec 2 04:25:03 localhost systemd[153805]: Stopped target Main User Target. Dec 2 04:25:03 localhost systemd[153805]: Stopped target Basic System. Dec 2 04:25:03 localhost systemd[153805]: Stopped target Paths. Dec 2 04:25:03 localhost systemd[153805]: Stopped target Sockets. Dec 2 04:25:03 localhost systemd[153805]: Stopped target Timers. Dec 2 04:25:03 localhost systemd[153805]: Stopped Daily Cleanup of User's Temporary Directories. Dec 2 04:25:03 localhost systemd[153805]: Closed D-Bus User Message Bus Socket. Dec 2 04:25:03 localhost systemd[153805]: Stopped Create User's Volatile Files and Directories. Dec 2 04:25:03 localhost systemd[153805]: Removed slice User Application Slice. Dec 2 04:25:03 localhost systemd[153805]: Reached target Shutdown. Dec 2 04:25:03 localhost systemd[153805]: Finished Exit the Session. Dec 2 04:25:03 localhost systemd[153805]: Reached target Exit the Session. 
Dec 2 04:25:03 localhost systemd[1]: user@0.service: Deactivated successfully. Dec 2 04:25:03 localhost systemd[1]: Stopped User Manager for UID 0. Dec 2 04:25:04 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Dec 2 04:25:04 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Dec 2 04:25:04 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Dec 2 04:25:04 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Dec 2 04:25:04 localhost systemd[1]: Removed slice User Slice of UID 0. Dec 2 04:25:04 localhost python3.9[154353]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:25:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39636 DF PROTO=TCP SPT=36732 DPT=9882 SEQ=332566116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E86E20000000001030307) Dec 2 04:25:05 localhost python3.9[154449]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:06 localhost python3.9[154541]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None 
serole=None selevel=None attributes=None Dec 2 04:25:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39637 DF PROTO=TCP SPT=36732 DPT=9882 SEQ=332566116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E8EE30000000001030307) Dec 2 04:25:06 localhost python3.9[154633]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:07 localhost python3.9[154725]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:08 localhost python3.9[154817]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4197 DF PROTO=TCP 
SPT=57484 DPT=9100 SEQ=2546861134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E9D220000000001030307) Dec 2 04:25:11 localhost python3.9[154907]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:25:12 localhost python3.9[155000]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Dec 2 04:25:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44713 DF PROTO=TCP SPT=55490 DPT=9105 SEQ=206652984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EA7E20000000001030307) Dec 2 04:25:13 localhost python3.9[155090]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:13 localhost python3.9[155163]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667512.6817198-220-226362207237117/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:14 localhost python3.9[155253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:15 localhost python3.9[155326]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667514.0171661-265-126838285431775/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16639 DF PROTO=TCP SPT=51906 DPT=9105 SEQ=863709002 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EB3220000000001030307) Dec 2 04:25:16 localhost python3.9[155418]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:25:16 localhost python3.9[155472]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:25:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14329 DF PROTO=TCP SPT=46910 DPT=9102 SEQ=955170998 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EBF220000000001030307) Dec 2 04:25:21 localhost python3.9[155566]: 
ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 2 04:25:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12320 DF PROTO=TCP SPT=53152 DPT=9101 SEQ=1380458474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EC9220000000001030307) Dec 2 04:25:23 localhost python3.9[155659]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:23 localhost python3.9[155730]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667522.6252763-376-172714620551088/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:25:24 localhost python3.9[155820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:24 localhost systemd[1]: tmp-crun.e2iNSI.mount: Deactivated successfully. 
Dec 2 04:25:24 localhost podman[155821]: 2025-12-02 09:25:24.108431026 +0000 UTC m=+0.106115666 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125) Dec 2 04:25:24 localhost ovn_controller[153778]: 2025-12-02T09:25:24Z|00023|memory|INFO|13080 kB peak resident set size after 30.2 seconds Dec 2 04:25:24 localhost ovn_controller[153778]: 2025-12-02T09:25:24Z|00024|memory|INFO|idl-cells-OVN_Southbound:4028 idl-cells-Open_vSwitch:813 ofctrl_desired_flow_usage-KB:9 ofctrl_installed_flow_usage-KB:7 ofctrl_sb_flow_ref_usage-KB:3 Dec 2 04:25:24 localhost podman[155821]: 2025-12-02 09:25:24.187878314 +0000 UTC m=+0.185562904 container exec_died 
c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 04:25:24 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:25:24 localhost python3.9[155915]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667523.664777-376-230040373073843/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:26 localhost python3.9[156005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:26 localhost python3.9[156076]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667525.4554493-509-43200407583977/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:27 localhost python3.9[156166]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 
MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57255 DF PROTO=TCP SPT=49426 DPT=9101 SEQ=3183603933 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EDEA20000000001030307) Dec 2 04:25:27 localhost python3.9[156237]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667526.6062455-509-151344027655842/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:29 localhost python3.9[156327]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:25:30 localhost python3.9[156421]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:30 localhost python3.9[156513]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14331 DF PROTO=TCP SPT=46910 DPT=9102 SEQ=955170998 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EEF220000000001030307) Dec 2 04:25:31 localhost python3.9[156561]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:32 localhost python3.9[156653]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:32 localhost python3.9[156701]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:33 localhost python3.9[156793]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Dec 2 04:25:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53695 DF PROTO=TCP SPT=48526 DPT=9882 SEQ=590766144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EF8000000000001030307) Dec 2 04:25:33 localhost python3.9[156885]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:34 localhost python3.9[156933]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53696 DF PROTO=TCP SPT=48526 DPT=9882 SEQ=590766144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EFC220000000001030307) Dec 2 04:25:35 localhost python3.9[157025]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:35 localhost python3.9[157073]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset 
_original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:36 localhost python3.9[157165]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:25:36 localhost systemd[1]: Reloading. Dec 2 04:25:36 localhost systemd-rc-local-generator[157188]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:25:36 localhost systemd-sysv-generator[157193]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:25:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:25:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53697 DF PROTO=TCP SPT=48526 DPT=9882 SEQ=590766144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F04220000000001030307) Dec 2 04:25:37 localhost python3.9[157295]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:38 localhost python3.9[157343]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:38 localhost python3.9[157435]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:39 localhost python3.9[157483]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:40 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11093 DF PROTO=TCP SPT=50322 DPT=9100 SEQ=2776366170 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F11220000000001030307) Dec 2 04:25:40 localhost python3.9[157575]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:25:40 localhost systemd[1]: Reloading. Dec 2 04:25:40 localhost systemd-sysv-generator[157601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:25:40 localhost systemd-rc-local-generator[157598]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:25:40 localhost systemd[1]: Starting Create netns directory... Dec 2 04:25:40 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 04:25:40 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 04:25:40 localhost systemd[1]: Finished Create netns directory. 
Dec 2 04:25:42 localhost python3.9[157708]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8137 DF PROTO=TCP SPT=37242 DPT=9105 SEQ=863077683 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F1D220000000001030307) Dec 2 04:25:43 localhost python3.9[157800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:43 localhost python3.9[157873]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667542.865387-961-89189219767186/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:44 localhost python3.9[157965]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:45 localhost python3.9[158057]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11284 DF PROTO=TCP SPT=58472 DPT=9102 SEQ=3007936010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F285E0000000001030307) Dec 2 04:25:46 localhost python3.9[158132]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667544.9638226-1036-8980708798820/.source.json _original_basename=.tvrndj_8 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:46 localhost python3.9[158224]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11286 DF PROTO=TCP 
SPT=58472 DPT=9102 SEQ=3007936010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F34620000000001030307) Dec 2 04:25:49 localhost python3.9[158481]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False Dec 2 04:25:50 localhost python3.9[158573]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:25:51 localhost python3.9[158665]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 2 04:25:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57257 DF PROTO=TCP SPT=49426 DPT=9101 SEQ=3183603933 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F3F220000000001030307) Dec 2 04:25:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. 
Dec 2 04:25:55 localhost podman[158708]: 2025-12-02 09:25:55.096188954 +0000 UTC m=+0.094085137 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Dec 2 04:25:55 localhost podman[158708]: 2025-12-02 09:25:55.17388723 +0000 UTC m=+0.171783423 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 2 04:25:55 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:25:55 localhost python3[158808]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:25:56 localhost python3[158808]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9",#012 "Digest": "sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:29:20.327314945Z",#012 "Config": {#012 "User": "neutron",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 784141054,#012 "VirtualSize": 784141054,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53/diff:/var/lib/containers/storage/overlay/2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:75abaaa40a93c0e2bba524b6f8d4eb5f1c4c9a33db70c892c7582ec5b0827e5e",#012 "sha256:01f43f620d1ea2a9e584abe0cc14c336bedcf55765127c000d743f536dd36f25",#012 "sha256:0bf5bd378602f28be423f5e84abddff3b103396fae3c167031b6e3fcfcf6f120"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "neutron",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) 
ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main 
clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf Dec 2 04:25:56 localhost podman[158858]: 2025-12-02 09:25:56.115656625 +0000 UTC m=+0.080355267 container remove 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, batch=17.1_20251118.1) Dec 2 04:25:56 localhost python3[158808]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent Dec 2 04:25:56 localhost podman[158872]: Dec 2 04:25:56 localhost podman[158872]: 2025-12-02 09:25:56.214597661 +0000 UTC m=+0.079888824 container create 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 04:25:56 localhost podman[158872]: 2025-12-02 09:25:56.179072474 +0000 UTC m=+0.044363627 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 04:25:56 localhost python3[158808]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS 
--env EDPM_CONFIG_HASH=df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro 
--volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 04:25:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56727 DF PROTO=TCP SPT=60572 DPT=9101 SEQ=3092294769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F53E30000000001030307) Dec 2 04:25:57 localhost python3.9[159000]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:25:57 localhost python3.9[159094]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:58 localhost python3.9[159140]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:25:59 localhost python3.9[159231]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764667558.4321272-1300-94824610146451/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None 
checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:59 localhost python3.9[159277]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:25:59 localhost systemd[1]: Reloading. Dec 2 04:25:59 localhost systemd-rc-local-generator[159300]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:25:59 localhost systemd-sysv-generator[159303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:26:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:26:00 localhost python3.9[159389]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:26:00 localhost systemd[1]: Reloading. Dec 2 04:26:00 localhost systemd-sysv-generator[159434]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:26:00 localhost systemd-rc-local-generator[159430]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:26:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:26:01 localhost systemd[1]: Starting ovn_metadata_agent container... Dec 2 04:26:01 localhost systemd[1]: Started libcrun container. 
Dec 2 04:26:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4da2e578b04930fec74c4af4c53467c7a43895f3bd10cda05f9c1b3856d5818/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Dec 2 04:26:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4da2e578b04930fec74c4af4c53467c7a43895f3bd10cda05f9c1b3856d5818/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 04:26:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:26:01 localhost podman[159464]: 2025-12-02 09:26:01.302844446 +0000 UTC m=+0.151940076 container init 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 04:26:01 localhost systemd[1]: tmp-crun.89U60b.mount: Deactivated successfully. Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: + sudo -E kolla_set_configs Dec 2 04:26:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:26:01 localhost podman[159464]: 2025-12-02 09:26:01.345648955 +0000 UTC m=+0.194744585 container start 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 04:26:01 localhost edpm-start-podman-container[159464]: ovn_metadata_agent Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Validating config file Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Copying service configuration files Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Writing out command to execute Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: 
INFO:__main__:Setting permission for /var/lib/neutron/.cache Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/external Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: ++ cat /run_command Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: + CMD=neutron-ovn-metadata-agent Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: + ARGS= Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: + sudo kolla_copy_cacerts Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: + [[ ! -n '' ]] Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: + . 
kolla_extend_start Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\''' Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: Running command: 'neutron-ovn-metadata-agent' Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: + umask 0022 Dec 2 04:26:01 localhost ovn_metadata_agent[159477]: + exec neutron-ovn-metadata-agent Dec 2 04:26:01 localhost edpm-start-podman-container[159463]: Creating additional drop-in dependency for "ovn_metadata_agent" (225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1) Dec 2 04:26:01 localhost systemd[1]: Reloading. Dec 2 04:26:01 localhost podman[159485]: 2025-12-02 09:26:01.5266558 +0000 UTC m=+0.171876707 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:26:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11288 DF PROTO=TCP SPT=58472 DPT=9102 SEQ=3007936010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F65220000000001030307) Dec 2 04:26:01 localhost podman[159485]: 2025-12-02 09:26:01.567995154 +0000 UTC m=+0.213216091 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 04:26:01 localhost systemd-rc-local-generator[159545]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:26:01 localhost systemd-sysv-generator[159553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:26:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:26:01 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:26:01 localhost systemd[1]: Started ovn_metadata_agent container. 
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.059 159483 INFO neutron.common.config [-] Logging enabled!#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.059 159483 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.059 159483 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.059 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.059 159483 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] 
agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 
2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] debug = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain = openstacklocal 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] host = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost 
ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval 
= 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format = %(user)s 
%(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 
localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG 
neutron.agent.ovn.metadata_agent [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG 
neutron.agent.ovn.metadata_agent [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:03 localhost 
ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-]
oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost 
ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.090 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.090 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.143 159483 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.144 159483 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.144 159483 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.144 159483 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.145 159483 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m Dec 2 04:26:03 localhost 
ovn_metadata_agent[159477]: 2025-12-02 09:26:03.172 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 515e0717-8baa-40e6-ac30-5fb148626504 (UUID: 515e0717-8baa-40e6-ac30-5fb148626504) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.195 159483 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.195 159483 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.195 159483 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.196 159483 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.197 159483 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.201 159483 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.213 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '515e0717-8baa-40e6-ac30-5fb148626504'),), old_conditions=None), priority=20 to 
row=Chassis_Private(chassis=[], external_ids={'neutron:ovn-metadata-id': 'b2bdd40e-63a2-5a14-9aa4-7df5929ce52d', 'neutron:ovn-metadata-sb-cfg': '1'}, name=515e0717-8baa-40e6-ac30-5fb148626504, nb_cfg_timestamp=1764667502568, nb_cfg=4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.215 159483 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.216 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.217 159483 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.217 159483 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.218 159483 INFO oslo_service.service [-] Starting 1 workers#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.223 159483 DEBUG oslo_service.service [-] Started child 159597 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.227 159597 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-387997'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m Dec 2 
04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.230 159483 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpanuh74ka/privsep.sock']#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.257 159597 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.258 159597 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.259 159597 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.263 159597 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.265 159597 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.284 159597 INFO eventlet.wsgi.server [-] (159597) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m Dec 2 04:26:03 localhost systemd[1]: session-51.scope: Deactivated successfully. Dec 2 04:26:03 localhost systemd[1]: session-51.scope: Consumed 32.721s CPU time. Dec 2 04:26:03 localhost systemd-logind[760]: Session 51 logged out. Waiting for processes to exit. 
Dec 2 04:26:03 localhost systemd-logind[760]: Removed session 51.
Dec 2 04:26:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62903 DF PROTO=TCP SPT=58678 DPT=9882 SEQ=1205215035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F6D310000000001030307)
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.915 159483 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.916 159483 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpanuh74ka/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.794 159602 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.797 159602 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.799 159602 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.799 159602 INFO oslo.privsep.daemon [-] privsep daemon running as pid 159602#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:03.920 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[54b9440d-c494-4358-94c9-34d6f72f2dae]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.400 159602 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.400 159602 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.400 159602 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:26:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62904 DF PROTO=TCP SPT=58678 DPT=9882 SEQ=1205215035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F71230000000001030307)
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.852 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[e556d8a9-fc26-461a-96b1-2d591fea2d1c]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.855 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, column=external_ids, values=({'neutron:ovn-metadata-id': 'b2bdd40e-63a2-5a14-9aa4-7df5929ce52d'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.856 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.857 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.866 159483 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.866 159483 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.866 159483 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.867 159483 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.867 159483 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.867 159483 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.867 159483 DEBUG oslo_service.service [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.868 159483 DEBUG oslo_service.service [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.868 159483 DEBUG oslo_service.service [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.868 159483 DEBUG oslo_service.service [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.869 159483 DEBUG oslo_service.service [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.869 159483 DEBUG oslo_service.service [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.869 159483 DEBUG oslo_service.service [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.869 159483 DEBUG oslo_service.service [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.870 159483 DEBUG oslo_service.service [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.870 159483 DEBUG oslo_service.service [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.870 159483 DEBUG oslo_service.service [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.870 159483 DEBUG oslo_service.service [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.871 159483 DEBUG oslo_service.service [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.871 159483 DEBUG oslo_service.service [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.871 159483 DEBUG oslo_service.service [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.871 159483 DEBUG oslo_service.service [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.871 159483 DEBUG oslo_service.service [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.872 159483 DEBUG oslo_service.service [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.872 159483 DEBUG oslo_service.service [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.872 159483 DEBUG oslo_service.service [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.873 159483 DEBUG oslo_service.service [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.873 159483 DEBUG oslo_service.service [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.873 159483 DEBUG oslo_service.service [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.873 159483 DEBUG oslo_service.service [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.874 159483 DEBUG oslo_service.service [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.874 159483 DEBUG oslo_service.service [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.874 159483 DEBUG oslo_service.service [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.874 159483 DEBUG oslo_service.service [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.875 159483 DEBUG oslo_service.service [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.875 159483 DEBUG oslo_service.service [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.875 159483 DEBUG oslo_service.service [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.876 159483 DEBUG oslo_service.service [-] host = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.876 159483 DEBUG oslo_service.service [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.876 159483 DEBUG oslo_service.service [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.876 159483 DEBUG oslo_service.service [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.876 159483 DEBUG oslo_service.service [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.877 159483 DEBUG oslo_service.service [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.877 159483 DEBUG oslo_service.service [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.877 159483 DEBUG oslo_service.service [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.877 159483 DEBUG oslo_service.service [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.878 159483 DEBUG oslo_service.service [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.878 159483 DEBUG oslo_service.service [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.878 159483 DEBUG oslo_service.service [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.878 159483 DEBUG oslo_service.service [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.878 159483 DEBUG oslo_service.service [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.879 159483 DEBUG oslo_service.service [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.879 159483 DEBUG oslo_service.service [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.879 159483 DEBUG oslo_service.service [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.879 159483 DEBUG oslo_service.service [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.879 159483 DEBUG oslo_service.service [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.880 159483 DEBUG oslo_service.service [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.880 159483 DEBUG oslo_service.service [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.880 159483 DEBUG oslo_service.service [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.880 159483 DEBUG oslo_service.service [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.881 159483 DEBUG oslo_service.service [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.881 159483 DEBUG oslo_service.service [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.881 159483 DEBUG oslo_service.service [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.881 159483 DEBUG oslo_service.service [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.882 159483 DEBUG oslo_service.service [-] metadata_proxy_socket =
/var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.882 159483 DEBUG oslo_service.service [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.882 159483 DEBUG oslo_service.service [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.882 159483 DEBUG oslo_service.service [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.882 159483 DEBUG oslo_service.service [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.883 159483 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.883 159483 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.883 159483 DEBUG oslo_service.service [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.883 159483 DEBUG oslo_service.service [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.884 159483 DEBUG 
oslo_service.service [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.884 159483 DEBUG oslo_service.service [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.884 159483 DEBUG oslo_service.service [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.884 159483 DEBUG oslo_service.service [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.885 159483 DEBUG oslo_service.service [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.885 159483 DEBUG oslo_service.service [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.885 159483 DEBUG oslo_service.service [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.885 159483 DEBUG oslo_service.service [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.886 159483 DEBUG oslo_service.service [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 
09:26:04.886 159483 DEBUG oslo_service.service [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.886 159483 DEBUG oslo_service.service [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.886 159483 DEBUG oslo_service.service [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.886 159483 DEBUG oslo_service.service [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.887 159483 DEBUG oslo_service.service [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.887 159483 DEBUG oslo_service.service [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.887 159483 DEBUG oslo_service.service [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.887 159483 DEBUG oslo_service.service [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.887 159483 DEBUG oslo_service.service [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 
09:26:04.888 159483 DEBUG oslo_service.service [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.888 159483 DEBUG oslo_service.service [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.888 159483 DEBUG oslo_service.service [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.888 159483 DEBUG oslo_service.service [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.888 159483 DEBUG oslo_service.service [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.889 159483 DEBUG oslo_service.service [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.889 159483 DEBUG oslo_service.service [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.889 159483 DEBUG oslo_service.service [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.889 159483 DEBUG oslo_service.service [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.889 159483 DEBUG oslo_service.service [-] 
use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.890 159483 DEBUG oslo_service.service [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.890 159483 DEBUG oslo_service.service [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.890 159483 DEBUG oslo_service.service [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.890 159483 DEBUG oslo_service.service [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.890 159483 DEBUG oslo_service.service [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.891 159483 DEBUG oslo_service.service [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.891 159483 DEBUG oslo_service.service [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.891 159483 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost 
ovn_metadata_agent[159477]: 2025-12-02 09:26:04.891 159483 DEBUG oslo_service.service [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.892 159483 DEBUG oslo_service.service [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.892 159483 DEBUG oslo_service.service [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.892 159483 DEBUG oslo_service.service [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.892 159483 DEBUG oslo_service.service [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.893 159483 DEBUG oslo_service.service [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.893 159483 DEBUG oslo_service.service [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.893 159483 DEBUG oslo_service.service [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.893 159483 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.894 159483 DEBUG oslo_service.service [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.894 159483 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.894 159483 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.894 159483 DEBUG oslo_service.service [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.895 159483 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.895 159483 DEBUG oslo_service.service [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.895 159483 DEBUG oslo_service.service [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.895 159483 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost 
ovn_metadata_agent[159477]: 2025-12-02 09:26:04.896 159483 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.896 159483 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.896 159483 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.896 159483 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.897 159483 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.897 159483 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.897 159483 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.897 159483 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 
2025-12-02 09:26:04.898 159483 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.898 159483 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.898 159483 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.898 159483 DEBUG oslo_service.service [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.899 159483 DEBUG oslo_service.service [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.899 159483 DEBUG oslo_service.service [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.899 159483 DEBUG oslo_service.service [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.899 159483 DEBUG oslo_service.service [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.899 159483 DEBUG oslo_service.service [-] privsep.user = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.900 159483 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.900 159483 DEBUG oslo_service.service [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.900 159483 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.900 159483 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.901 159483 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.901 159483 DEBUG oslo_service.service [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.901 159483 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.901 159483 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost 
ovn_metadata_agent[159477]: 2025-12-02 09:26:04.901 159483 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.902 159483 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.902 159483 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.902 159483 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.903 159483 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.903 159483 DEBUG oslo_service.service [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.903 159483 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.903 159483 DEBUG oslo_service.service [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.903 159483 DEBUG oslo_service.service [-] 
privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.904 159483 DEBUG oslo_service.service [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.904 159483 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.904 159483 DEBUG oslo_service.service [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.904 159483 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.905 159483 DEBUG oslo_service.service [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.905 159483 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.905 159483 DEBUG oslo_service.service [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.905 159483 DEBUG oslo_service.service [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 
localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.906 159483 DEBUG oslo_service.service [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.906 159483 DEBUG oslo_service.service [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.906 159483 DEBUG oslo_service.service [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.906 159483 DEBUG oslo_service.service [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.906 159483 DEBUG oslo_service.service [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.907 159483 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.907 159483 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.907 159483 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.907 159483 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.908 159483 DEBUG oslo_service.service [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.908 159483 DEBUG oslo_service.service [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.908 159483 DEBUG oslo_service.service [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.908 159483 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.908 159483 DEBUG oslo_service.service [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.909 159483 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.909 159483 DEBUG oslo_service.service [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.909 159483 DEBUG oslo_service.service [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.910 159483 DEBUG oslo_service.service [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.910 159483 DEBUG oslo_service.service [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.910 159483 DEBUG oslo_service.service [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.910 159483 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.910 159483 DEBUG oslo_service.service [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.911 159483 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.911 159483 DEBUG oslo_service.service [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.911 159483 DEBUG oslo_service.service [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.911 159483 DEBUG oslo_service.service [-] nova.cafile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.911 159483 DEBUG oslo_service.service [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.912 159483 DEBUG oslo_service.service [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.912 159483 DEBUG oslo_service.service [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.912 159483 DEBUG oslo_service.service [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.912 159483 DEBUG oslo_service.service [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.912 159483 DEBUG oslo_service.service [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.912 159483 DEBUG oslo_service.service [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] placement.auth_section = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.914 159483 DEBUG oslo_service.service [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.914 159483 DEBUG oslo_service.service [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.914 159483 DEBUG oslo_service.service [-] placement.split_loggers = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.914 159483 DEBUG oslo_service.service [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.914 159483 DEBUG oslo_service.service [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.914 159483 DEBUG oslo_service.service [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.914 159483 DEBUG oslo_service.service [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.915 159483 DEBUG oslo_service.service [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.915 159483 DEBUG oslo_service.service [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.915 159483 DEBUG oslo_service.service [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.915 159483 DEBUG oslo_service.service [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.915 159483 DEBUG oslo_service.service [-] ironic.enable_notifications = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.915 159483 DEBUG oslo_service.service [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.915 159483 DEBUG oslo_service.service [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.916 159483 DEBUG oslo_service.service [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.916 159483 DEBUG oslo_service.service [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.916 159483 DEBUG oslo_service.service [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.916 159483 DEBUG oslo_service.service [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.916 159483 DEBUG oslo_service.service [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.916 159483 DEBUG oslo_service.service [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.916 159483 DEBUG oslo_service.service [-] ironic.service_type = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.918 159483 DEBUG oslo_service.service 
[-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.918 159483 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.918 159483 DEBUG oslo_service.service [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.918 159483 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.918 159483 DEBUG oslo_service.service [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.918 159483 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.918 159483 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.919 159483 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.919 159483 DEBUG oslo_service.service [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 
2025-12-02 09:26:04.919 159483 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.919 159483 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.919 159483 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.919 159483 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.920 159483 DEBUG oslo_service.service [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.920 159483 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.920 159483 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.920 159483 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.920 159483 DEBUG oslo_service.service [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.920 159483 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.920 159483 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.921 159483 DEBUG oslo_service.service [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.921 159483 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.921 159483 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.921 159483 DEBUG oslo_service.service [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.921 159483 DEBUG oslo_service.service [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.921 159483 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.921 159483 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.922 159483 DEBUG oslo_service.service [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.922 159483 DEBUG oslo_service.service [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.922 159483 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.922 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.922 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.922 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.922 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.923 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost 
ovn_metadata_agent[159477]: 2025-12-02 09:26:04.923 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.923 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.923 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.923 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.923 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.923 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.924 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.924 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost 
ovn_metadata_agent[159477]: 2025-12-02 09:26:04.924 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.924 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.924 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.924 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.924 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.925 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.925 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.925 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost 
ovn_metadata_agent[159477]: 2025-12-02 09:26:04.925 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.925 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.925 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.925 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.926 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.926 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.926 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.926 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.926 159483 DEBUG 
oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.926 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.927 159483 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.927 159483 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.927 159483 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.927 159483 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:04 localhost ovn_metadata_agent[159477]: 2025-12-02 09:26:04.927 159483 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Dec 2 04:26:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62905 DF PROTO=TCP SPT=58678 DPT=9882 SEQ=1205215035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F79220000000001030307) Dec 2 04:26:09 localhost sshd[159607]: main: 
sshd: ssh-rsa algorithm is disabled Dec 2 04:26:09 localhost systemd-logind[760]: New session 52 of user zuul. Dec 2 04:26:09 localhost systemd[1]: Started Session 52 of User zuul. Dec 2 04:26:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41489 DF PROTO=TCP SPT=36486 DPT=9100 SEQ=1409631211 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F87230000000001030307) Dec 2 04:26:10 localhost python3.9[159700]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:26:12 localhost python3.9[159796]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:26:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52041 DF PROTO=TCP SPT=44504 DPT=9105 SEQ=3672999032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F92630000000001030307) Dec 2 04:26:13 localhost python3.9[159901]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:26:13 localhost systemd[1]: tmp-crun.4QOz1O.mount: Deactivated successfully. Dec 2 04:26:13 localhost systemd[1]: libpod-c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a.scope: Deactivated successfully. 
Dec 2 04:26:13 localhost podman[159902]: 2025-12-02 09:26:13.443148779 +0000 UTC m=+0.093373860 container died c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, build-date=2025-11-19T00:35:22Z, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 04:26:13 localhost podman[159902]: 2025-12-02 09:26:13.483065153 +0000 UTC m=+0.133290214 container cleanup c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, 
release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vcs-type=git, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc.) Dec 2 04:26:13 localhost podman[159917]: 2025-12-02 09:26:13.53855346 +0000 UTC m=+0.084441153 container remove c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, build-date=2025-11-19T00:35:22Z, vcs-type=git, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, tcib_managed=true, 
architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 04:26:13 localhost systemd[1]: libpod-conmon-c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a.scope: Deactivated successfully. Dec 2 04:26:14 localhost systemd[1]: tmp-crun.fefopQ.mount: Deactivated successfully. Dec 2 04:26:14 localhost systemd[1]: var-lib-containers-storage-overlay-f14a2782138c084b8d1f9a2d1c3241237dbc098d9496c81144c959b54b35a260-merged.mount: Deactivated successfully. Dec 2 04:26:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a-userdata-shm.mount: Deactivated successfully. Dec 2 04:26:14 localhost python3.9[160023]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:26:14 localhost systemd[1]: Reloading. Dec 2 04:26:14 localhost systemd-rc-local-generator[160048]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:26:14 localhost systemd-sysv-generator[160052]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:26:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:26:15 localhost python3.9[160148]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:26:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44718 DF PROTO=TCP SPT=55490 DPT=9105 SEQ=206652984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F9D230000000001030307) Dec 2 04:26:15 localhost network[160165]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:26:15 localhost network[160166]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:26:15 localhost network[160167]: It is advised to switch to 'NetworkManager' instead for network management. Dec 2 04:26:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:26:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62907 DF PROTO=TCP SPT=58678 DPT=9882 SEQ=1205215035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FA9220000000001030307) Dec 2 04:26:21 localhost python3.9[160369]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:26:21 localhost systemd[1]: Reloading. Dec 2 04:26:21 localhost systemd-rc-local-generator[160394]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:26:21 localhost systemd-sysv-generator[160400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:26:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:26:21 localhost systemd[1]: Stopped target tripleo_nova_libvirt.target. Dec 2 04:26:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56729 DF PROTO=TCP SPT=60572 DPT=9101 SEQ=3092294769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FB5220000000001030307) Dec 2 04:26:22 localhost python3.9[160501]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:26:24 localhost python3.9[160594]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:26:24 localhost python3.9[160687]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:26:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:26:26 localhost sshd[160690]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:26:26 localhost systemd[1]: tmp-crun.sTFtFC.mount: Deactivated successfully. 
Dec 2 04:26:26 localhost podman[160689]: 2025-12-02 09:26:26.10252743 +0000 UTC m=+0.098410176 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:26:26 localhost podman[160689]: 2025-12-02 09:26:26.17592522 +0000 UTC m=+0.171807676 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:26:26 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:26:26 localhost python3.9[160806]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:26:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59594 DF PROTO=TCP SPT=34650 DPT=9101 SEQ=2590249443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FC9220000000001030307) Dec 2 04:26:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52044 DF PROTO=TCP SPT=44504 DPT=9105 SEQ=3672999032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FCB220000000001030307) Dec 2 04:26:27 localhost python3.9[160899]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:26:28 localhost python3.9[160992]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:26:29 localhost python3.9[161085]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:26:30 localhost python3.9[161177]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:26:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27390 DF PROTO=TCP SPT=47454 DPT=9102 SEQ=1091343766 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FD9220000000001030307) Dec 2 04:26:31 localhost python3.9[161269]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:26:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. 
Dec 2 04:26:32 localhost podman[161270]: 2025-12-02 09:26:32.066722432 +0000 UTC m=+0.066203149 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 04:26:32 localhost podman[161270]: 2025-12-02 09:26:32.101894229 +0000 UTC 
m=+0.101374926 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 04:26:32 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:26:32 localhost python3.9[161379]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:26:33 localhost python3.9[161471]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:26:33 localhost python3.9[161563]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:26:34 localhost python3.9[161655]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:26:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 
MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17563 DF PROTO=TCP SPT=36876 DPT=9882 SEQ=4223304392 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FE6620000000001030307) Dec 2 04:26:34 localhost python3.9[161747]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:26:35 localhost python3.9[161839]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:26:35 localhost python3.9[161931]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:26:36 localhost python3.9[162023]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:26:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17564 DF PROTO=TCP SPT=36876 DPT=9882 SEQ=4223304392 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FEE620000000001030307) Dec 2 04:26:37 localhost python3.9[162115]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:26:37 localhost python3.9[162207]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:26:38 localhost python3.9[162299]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:26:39 localhost python3.9[162391]: ansible-ansible.legacy.command Invoked with 
_raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:26:39 localhost python3.9[162483]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 2 04:26:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62854 DF PROTO=TCP SPT=54384 DPT=9100 SEQ=970801497 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FFD220000000001030307) Dec 2 04:26:41 localhost python3.9[162575]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:26:41 localhost systemd[1]: Reloading. Dec 2 04:26:41 localhost systemd-rc-local-generator[162597]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:26:41 localhost systemd-sysv-generator[162603]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:26:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:26:42 localhost python3.9[162703]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:26:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4092 DF PROTO=TCP SPT=46746 DPT=9105 SEQ=2375097227 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54007630000000001030307) Dec 2 04:26:43 localhost python3.9[162796]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:26:44 localhost python3.9[162889]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:26:44 localhost python3.9[162982]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:26:45 localhost python3.9[163075]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None 
removes=None stdin=None Dec 2 04:26:45 localhost python3.9[163168]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:26:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44006 DF PROTO=TCP SPT=34134 DPT=9102 SEQ=3684987153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54012BF0000000001030307) Dec 2 04:26:46 localhost python3.9[163261]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:26:48 localhost python3.9[163355]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None Dec 2 04:26:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44008 DF PROTO=TCP SPT=34134 DPT=9102 SEQ=3684987153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5401EE20000000001030307) Dec 2 04:26:49 localhost python3.9[163448]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Dec 2 04:26:49 localhost systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 76.6 (255 of 333 items), suggesting rotation. 
Dec 2 04:26:49 localhost systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 04:26:49 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:26:49 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:26:50 localhost python3.9[163547]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541914.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Dec 2 04:26:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59596 DF PROTO=TCP SPT=34650 DPT=9101 SEQ=2590249443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54029230000000001030307) Dec 2 04:26:51 localhost python3.9[163647]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:26:53 localhost python3.9[163701]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 
'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:26:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:26:57 localhost podman[163722]: 2025-12-02 09:26:57.098236682 +0000 UTC m=+0.097432445 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:26:57 localhost podman[163722]: 2025-12-02 09:26:57.161908352 +0000 UTC m=+0.161104095 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 04:26:57 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:26:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25836 DF PROTO=TCP SPT=58862 DPT=9101 SEQ=1538546494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5403E630000000001030307) Dec 2 04:27:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44010 DF PROTO=TCP SPT=34134 DPT=9102 SEQ=3684987153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5404F220000000001030307) Dec 2 04:27:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:27:03 localhost systemd[1]: tmp-crun.65dltK.mount: Deactivated successfully. Dec 2 04:27:03 localhost podman[163797]: 2025-12-02 09:27:03.107957352 +0000 UTC m=+0.104672198 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Dec 2 04:27:03 localhost podman[163797]: 2025-12-02 09:27:03.113431501 +0000 UTC m=+0.110146307 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:27:03 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:27:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:27:03.130 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:27:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:27:03.130 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:27:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:27:03.130 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:27:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35199 DF PROTO=TCP SPT=55990 DPT=9882 SEQ=3044870298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54057900000000001030307) Dec 2 04:27:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35200 DF PROTO=TCP SPT=55990 DPT=9882 SEQ=3044870298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5405BA20000000001030307) Dec 2 04:27:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35201 DF PROTO=TCP SPT=55990 DPT=9882 SEQ=3044870298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54063A20000000001030307) Dec 2 04:27:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30292 DF PROTO=TCP SPT=33450 DPT=9100 SEQ=3336536510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54071220000000001030307) Dec 2 04:27:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:27:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 
percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 
memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval 
compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): 
cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Dec 2 04:27:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62855 DF PROTO=TCP SPT=54384 DPT=9100 SEQ=970801497 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5407B220000000001030307) Dec 2 04:27:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:27:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.2 total, 600.0 interval#012Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 
4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for 
pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 
0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdo Dec 2 04:27:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9675 DF PROTO=TCP SPT=40786 DPT=9102 SEQ=3376284197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54087EE0000000001030307) Dec 2 04:27:18 localhost kernel: SELinux: Converting 2746 SID table entries... Dec 2 04:27:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35203 DF PROTO=TCP SPT=55990 DPT=9882 SEQ=3044870298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54093220000000001030307) Dec 2 04:27:18 localhost kernel: SELinux: Context system_u:object_r:insights_client_cache_t:s0 became invalid (unmapped). 
Dec 2 04:27:18 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 2 04:27:18 localhost kernel: SELinux: policy capability open_perms=1 Dec 2 04:27:18 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 2 04:27:18 localhost kernel: SELinux: policy capability always_check_network=0 Dec 2 04:27:18 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 2 04:27:18 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 2 04:27:18 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 2 04:27:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25838 DF PROTO=TCP SPT=58862 DPT=9101 SEQ=1538546494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5409F230000000001030307) Dec 2 04:27:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28802 DF PROTO=TCP SPT=57196 DPT=9101 SEQ=654126358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540B3630000000001030307) Dec 2 04:27:28 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=19 res=1 Dec 2 04:27:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:27:28 localhost systemd[1]: tmp-crun.YDaw87.mount: Deactivated successfully. 
Dec 2 04:27:28 localhost podman[164913]: 2025-12-02 09:27:28.235273258 +0000 UTC m=+0.113728628 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:27:28 localhost podman[164913]: 2025-12-02 09:27:28.343074224 +0000 UTC m=+0.221529614 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125) Dec 2 04:27:28 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 04:27:29 localhost kernel: SELinux: Converting 2749 SID table entries... 
Dec 2 04:27:29 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 2 04:27:29 localhost kernel: SELinux: policy capability open_perms=1 Dec 2 04:27:29 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 2 04:27:29 localhost kernel: SELinux: policy capability always_check_network=0 Dec 2 04:27:29 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 2 04:27:29 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 2 04:27:29 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 2 04:27:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9679 DF PROTO=TCP SPT=40786 DPT=9102 SEQ=3376284197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540C3220000000001030307) Dec 2 04:27:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15954 DF PROTO=TCP SPT=45510 DPT=9882 SEQ=2592087443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540CCC10000000001030307) Dec 2 04:27:33 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=20 res=1 Dec 2 04:27:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. 
Dec 2 04:27:34 localhost podman[164947]: 2025-12-02 09:27:34.109829767 +0000 UTC m=+0.088751830 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:27:34 localhost podman[164947]: 2025-12-02 09:27:34.149893372 +0000 UTC 
m=+0.128815435 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 04:27:34 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:27:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15955 DF PROTO=TCP SPT=45510 DPT=9882 SEQ=2592087443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540D0E20000000001030307)
Dec 2 04:27:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15956 DF PROTO=TCP SPT=45510 DPT=9882 SEQ=2592087443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540D8E20000000001030307)
Dec 2 04:27:37 localhost kernel: SELinux: Converting 2749 SID table entries...
Dec 2 04:27:37 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 04:27:37 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 04:27:37 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 04:27:37 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 04:27:37 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 04:27:37 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 04:27:37 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 04:27:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8420 DF PROTO=TCP SPT=60342 DPT=9100 SEQ=550164076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540E7220000000001030307)
Dec 2 04:27:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61809 DF PROTO=TCP SPT=45892 DPT=9105 SEQ=704966943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540F1E30000000001030307)
Dec 2 04:27:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37827 DF PROTO=TCP SPT=37188 DPT=9102 SEQ=3609529685 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540FD1E0000000001030307)
Dec 2 04:27:46 localhost kernel: SELinux: Converting 2749 SID table entries...
Dec 2 04:27:46 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 04:27:46 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 04:27:46 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 04:27:46 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 04:27:46 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 04:27:46 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 04:27:46 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 04:27:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15958 DF PROTO=TCP SPT=45510 DPT=9882 SEQ=2592087443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54109230000000001030307)
Dec 2 04:27:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28804 DF PROTO=TCP SPT=57196 DPT=9101 SEQ=654126358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54113220000000001030307)
Dec 2 04:27:55 localhost kernel: SELinux: Converting 2749 SID table entries...
Dec 2 04:27:55 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 04:27:55 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 04:27:55 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 04:27:55 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 04:27:55 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 04:27:55 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 04:27:55 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 04:27:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60997 DF PROTO=TCP SPT=34648 DPT=9101 SEQ=1711180692 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54128A20000000001030307)
Dec 2 04:27:58 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=23 res=1
Dec 2 04:27:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:27:59 localhost podman[164994]: 2025-12-02 09:27:59.081976033 +0000 UTC m=+0.078487852 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec 2 04:27:59 localhost podman[164994]: 2025-12-02 09:27:59.149011996 +0000 UTC m=+0.145523815 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec 2 04:27:59 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:28:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37831 DF PROTO=TCP SPT=37188 DPT=9102 SEQ=3609529685 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54139220000000001030307)
Dec 2 04:28:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:28:03.132 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:28:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:28:03.134 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:28:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:28:03.134 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:28:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39431 DF PROTO=TCP SPT=49376 DPT=9882 SEQ=2903844089 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54141F10000000001030307)
Dec 2 04:28:04 localhost kernel: SELinux: Converting 2749 SID table entries...
Dec 2 04:28:04 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 04:28:04 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 04:28:04 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 04:28:04 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 04:28:04 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 04:28:04 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 04:28:04 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 04:28:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39432 DF PROTO=TCP SPT=49376 DPT=9882 SEQ=2903844089 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54145E20000000001030307)
Dec 2 04:28:04 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=24 res=1
Dec 2 04:28:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:28:04 localhost podman[165042]: 2025-12-02 09:28:04.798617685 +0000 UTC m=+0.085506979 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:28:04 localhost podman[165042]: 2025-12-02 09:28:04.808598468 +0000 UTC m=+0.095487752 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 2 04:28:04 localhost systemd[1]: Reloading.
Dec 2 04:28:04 localhost systemd-rc-local-generator[165104]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:28:04 localhost systemd-sysv-generator[165107]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:28:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:28:05 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:28:05 localhost systemd[1]: Reloading.
Dec 2 04:28:05 localhost systemd-rc-local-generator[165172]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:28:05 localhost systemd-sysv-generator[165176]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:28:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:28:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39433 DF PROTO=TCP SPT=49376 DPT=9882 SEQ=2903844089 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5414DE20000000001030307)
Dec 2 04:28:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25494 DF PROTO=TCP SPT=35460 DPT=9100 SEQ=209492473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5415B220000000001030307)
Dec 2 04:28:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8421 DF PROTO=TCP SPT=60342 DPT=9100 SEQ=550164076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54165220000000001030307)
Dec 2 04:28:14 localhost kernel: SELinux: Converting 2750 SID table entries...
Dec 2 04:28:14 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 04:28:14 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 04:28:14 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 04:28:14 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 04:28:14 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 04:28:14 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 04:28:14 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 04:28:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5692 DF PROTO=TCP SPT=44704 DPT=9102 SEQ=374549034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541724F0000000001030307)
Dec 2 04:28:18 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 2 04:28:18 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=25 res=1
Dec 2 04:28:18 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 2 04:28:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39435 DF PROTO=TCP SPT=49376 DPT=9882 SEQ=2903844089 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5417D2C0000000001030307)
Dec 2 04:28:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60999 DF PROTO=TCP SPT=34648 DPT=9101 SEQ=1711180692 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54189220000000001030307)
Dec 2 04:28:25 localhost sshd[165449]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:28:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32955 DF PROTO=TCP SPT=40984 DPT=9101 SEQ=2589123676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5419DE20000000001030307)
Dec 2 04:28:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:28:30 localhost podman[165451]: 2025-12-02 09:28:30.097012785 +0000 UTC m=+0.087452448 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller)
Dec 2 04:28:30 localhost podman[165451]: 2025-12-02 09:28:30.161857788 +0000 UTC m=+0.152297511 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:28:30 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:28:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5696 DF PROTO=TCP SPT=44704 DPT=9102 SEQ=374549034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541AF220000000001030307)
Dec 2 04:28:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59734 DF PROTO=TCP SPT=40510 DPT=9882 SEQ=4201351893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541B7210000000001030307)
Dec 2 04:28:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59735 DF PROTO=TCP SPT=40510 DPT=9882 SEQ=4201351893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541BB220000000001030307)
Dec 2 04:28:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:28:36 localhost podman[168403]: 2025-12-02 09:28:36.08976473 +0000 UTC m=+0.090008549 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec 2 04:28:36 localhost podman[168403]: 2025-12-02 09:28:36.122909673 +0000 UTC m=+0.123153492 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 2 04:28:36 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:28:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59736 DF PROTO=TCP SPT=40510 DPT=9882 SEQ=4201351893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541C3220000000001030307)
Dec 2 04:28:38 localhost sshd[169858]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:28:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55659 DF PROTO=TCP SPT=37460 DPT=9100 SEQ=4060318635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541D1220000000001030307)
Dec 2 04:28:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63230 DF PROTO=TCP SPT=37626 DPT=9105 SEQ=977405425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541DC220000000001030307)
Dec 2 04:28:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61814 DF PROTO=TCP SPT=45892 DPT=9105 SEQ=704966943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541E7220000000001030307)
Dec 2 04:28:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59738 DF PROTO=TCP SPT=40510 DPT=9882 SEQ=4201351893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541F3220000000001030307)
Dec 2 04:28:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32957 DF PROTO=TCP SPT=40984 DPT=9101 SEQ=2589123676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541FF220000000001030307)
Dec 2 04:28:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1763 DF PROTO=TCP SPT=34954 DPT=9101 SEQ=1749529197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54213220000000001030307)
Dec 2 04:28:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63233 DF PROTO=TCP SPT=37626 DPT=9105 SEQ=977405425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54215220000000001030307)
Dec 2 04:29:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:29:01 localhost systemd[1]: Stopping OpenSSH server daemon...
Dec 2 04:29:01 localhost systemd[1]: sshd.service: Deactivated successfully.
Dec 2 04:29:01 localhost systemd[1]: Stopped OpenSSH server daemon.
Dec 2 04:29:01 localhost systemd[1]: sshd.service: Consumed 1.360s CPU time, read 32.0K from disk, written 0B to disk.
Dec 2 04:29:01 localhost systemd[1]: Stopped target sshd-keygen.target.
Dec 2 04:29:01 localhost systemd[1]: Stopping sshd-keygen.target...
Dec 2 04:29:01 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 04:29:01 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 04:29:01 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 04:29:01 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 2 04:29:01 localhost systemd[1]: Starting OpenSSH server daemon...
Dec 2 04:29:01 localhost sshd[183018]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:29:01 localhost systemd[1]: Started OpenSSH server daemon.
Dec 2 04:29:01 localhost podman[183005]: 2025-12-02 09:29:01.071120592 +0000 UTC m=+0.082826494 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:29:01 localhost podman[183005]: 2025-12-02 09:29:01.111615665 +0000 UTC m=+0.123321617 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:29:01 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31274 DF PROTO=TCP SPT=57088 DPT=9102 SEQ=2967827767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54223220000000001030307)
Dec 2 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:02 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:02 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:02 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:02 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:02 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:02 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:02 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:02 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:02 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:02 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:02 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:02 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:02 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:02 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:29:03.134 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:29:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:29:03.134 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:29:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:29:03.135 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:29:03 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 04:29:03 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 04:29:03 localhost systemd[1]: Reloading.
Dec 2 04:29:03 localhost systemd-rc-local-generator[183327]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:03 localhost systemd-sysv-generator[183330]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:29:03 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:03 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:03 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:03 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:03 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:03 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:03 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:03 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:03 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:03 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 04:29:03 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 04:29:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8158 DF PROTO=TCP SPT=51746 DPT=9882 SEQ=2012140012 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54230620000000001030307)
Dec 2 04:29:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:29:06 localhost podman[187438]: 2025-12-02 09:29:06.326552503 +0000 UTC m=+0.077939352 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:29:06 localhost podman[187438]: 2025-12-02 09:29:06.362901017 +0000 UTC m=+0.114287936 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:29:06 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:29:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8159 DF PROTO=TCP SPT=51746 DPT=9882 SEQ=2012140012 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54238620000000001030307)
Dec 2 04:29:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49288 DF PROTO=TCP SPT=60504 DPT=9100 SEQ=3567513200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54245220000000001030307)
Dec 2 04:29:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16953 DF PROTO=TCP SPT=55702 DPT=9105 SEQ=2484044217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54251620000000001030307)
Dec 2 04:29:15 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 04:29:15 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 04:29:15 localhost systemd[1]: man-db-cache-update.service: Consumed 14.896s CPU time.
Dec 2 04:29:15 localhost systemd[1]: run-re5fb30a729d24dcba4d6ddc90d4aef33.service: Deactivated successfully.
Dec 2 04:29:15 localhost systemd[1]: run-r80df02c78fb243f78c431720685a7ef1.service: Deactivated successfully.
Dec 2 04:29:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60589 DF PROTO=TCP SPT=46662 DPT=9102 SEQ=2111838520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5425CAF0000000001030307)
Dec 2 04:29:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60591 DF PROTO=TCP SPT=46662 DPT=9102 SEQ=2111838520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54268A20000000001030307)
Dec 2 04:29:19 localhost python3.9[192009]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 2 04:29:19 localhost systemd[1]: Reloading.
Dec 2 04:29:19 localhost systemd-rc-local-generator[192033]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:19 localhost systemd-sysv-generator[192039]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:20 localhost python3.9[192158]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 2 04:29:20 localhost systemd[1]: Reloading.
Dec 2 04:29:20 localhost systemd-rc-local-generator[192186]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:20 localhost systemd-sysv-generator[192190]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:29:20 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:20 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:20 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:20 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:20 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:20 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:20 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:20 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:21 localhost python3.9[192306]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 2 04:29:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1765 DF PROTO=TCP SPT=34954 DPT=9101 SEQ=1749529197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54273230000000001030307)
Dec 2 04:29:21 localhost systemd[1]: Reloading.
Dec 2 04:29:21 localhost systemd-rc-local-generator[192331]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:21 localhost systemd-sysv-generator[192336]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:22 localhost python3.9[192455]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 2 04:29:22 localhost systemd[1]: Reloading.
Dec 2 04:29:22 localhost systemd-rc-local-generator[192481]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:22 localhost systemd-sysv-generator[192485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:29:22 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:23 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:23 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:23 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:23 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:23 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:23 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:23 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost python3.9[192604]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:25 localhost systemd[1]: Reloading.
Dec 2 04:29:25 localhost systemd-rc-local-generator[192630]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:25 localhost systemd-sysv-generator[192635]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost python3.9[192752]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:26 localhost systemd[1]: Reloading.
Dec 2 04:29:26 localhost systemd-rc-local-generator[192783]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:26 localhost systemd-sysv-generator[192786]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37702 DF PROTO=TCP SPT=38500 DPT=9101 SEQ=3609384108 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54288220000000001030307)
Dec 2 04:29:27 localhost python3.9[192901]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:27 localhost systemd[1]: Reloading.
Dec 2 04:29:27 localhost systemd-rc-local-generator[192928]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:27 localhost systemd-sysv-generator[192935]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:28 localhost python3.9[193050]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:29 localhost python3.9[193163]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:29 localhost systemd[1]: Reloading.
Dec 2 04:29:29 localhost systemd-rc-local-generator[193195]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:29 localhost systemd-sysv-generator[193198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:29:29 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:29 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:29 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:29 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:29 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:29 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:29 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:29 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60593 DF PROTO=TCP SPT=46662 DPT=9102 SEQ=2111838520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54299220000000001030307)
Dec 2 04:29:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:29:32 localhost systemd[1]: tmp-crun.7T7tzh.mount: Deactivated successfully.
Dec 2 04:29:32 localhost podman[193220]: 2025-12-02 09:29:32.110473962 +0000 UTC m=+0.103552196 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller)
Dec 2 04:29:32 localhost podman[193220]: 2025-12-02 09:29:32.199879953 +0000 UTC m=+0.192958237 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 2 04:29:32 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:29:32 localhost python3.9[193337]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 2 04:29:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=98 DF PROTO=TCP SPT=52688 DPT=9882 SEQ=1422130130 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542A1810000000001030307)
Dec 2 04:29:33 localhost systemd[1]: Reloading.
Dec 2 04:29:33 localhost systemd-sysv-generator[193370]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:29:33 localhost systemd-rc-local-generator[193365]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:33 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:33 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:33 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:33 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:33 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:33 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:33 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:33 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=99 DF PROTO=TCP SPT=52688 DPT=9882 SEQ=1422130130 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542A5A20000000001030307)
Dec 2 04:29:34 localhost python3.9[193486]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:36 localhost python3.9[193599]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:29:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=100 DF PROTO=TCP SPT=52688 DPT=9882 SEQ=1422130130 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542ADA20000000001030307)
Dec 2 04:29:36 localhost podman[193601]: 2025-12-02 09:29:36.759856728 +0000 UTC m=+0.074482812 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 2 04:29:36 localhost podman[193601]: 2025-12-02 09:29:36.798129535 +0000 UTC m=+0.112755679 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 2 04:29:36 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:29:37 localhost python3.9[193730]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:38 localhost python3.9[193843]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:40 localhost python3.9[193956]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1952 DF PROTO=TCP SPT=56382 DPT=9100 SEQ=952668016 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542BB220000000001030307)
Dec 2 04:29:40 localhost python3.9[194069]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:42 localhost python3.9[194182]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3370 DF PROTO=TCP SPT=54862 DPT=9105 SEQ=3158011964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542C6A30000000001030307)
Dec 2 04:29:44 localhost python3.9[194295]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:44 localhost python3.9[194408]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57407 DF PROTO=TCP SPT=43958 DPT=9102 SEQ=3185712321 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542D1DE0000000001030307)
Dec 2 04:29:46 localhost python3.9[194521]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:47 localhost python3.9[194634]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:48 localhost python3.9[194747]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=102 DF PROTO=TCP SPT=52688 DPT=9882 SEQ=1422130130 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542DD220000000001030307)
Dec 2 04:29:49 localhost python3.9[194860]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:49 localhost python3.9[194973]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37704 DF PROTO=TCP SPT=38500 DPT=9101 SEQ=3609384108 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542E9220000000001030307)
Dec 2 04:29:51 localhost python3.9[195086]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:29:53 localhost python3.9[195196]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:29:53 localhost python3.9[195306]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:29:54 localhost python3.9[195416]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:29:55 localhost python3.9[195526]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:29:56 localhost python3.9[195636]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:29:56 localhost python3.9[195746]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:29:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26777 DF PROTO=TCP SPT=35554 DPT=9101 SEQ=1272628940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542FD620000000001030307)
Dec 2 04:29:57 localhost python3.9[195836]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667796.301507-1646-189306395908923/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:29:58 localhost python3.9[195946]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:29:58 localhost python3.9[196036]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667797.7130475-1646-180303503220734/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:29:59 localhost python3.9[196146]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:29:59 localhost python3.9[196236]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667798.9005985-1646-177896051673428/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:00 localhost python3.9[196346]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:30:01 localhost python3.9[196436]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667800.0311012-1646-91566603159161/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57411 DF PROTO=TCP SPT=43958 DPT=9102 SEQ=3185712321 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5430D220000000001030307)
Dec 2 04:30:01 localhost python3.9[196546]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:30:02 localhost python3.9[196636]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667801.2365613-1646-21257265897604/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:30:02 localhost podman[196747]: 2025-12-02 09:30:02.832124724 +0000 UTC m=+0.087484713 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:30:02 localhost podman[196747]: 2025-12-02 09:30:02.87265808 +0000 UTC m=+0.128018069 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true)
Dec 2 04:30:02 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
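The long run of `ansible-ansible.builtin.systemd Invoked with ...` records above (unmasking and enabling the libvirt modular-daemon sockets) follows a fixed argument layout, which makes the run easy to audit from the log alone. A small sketch that extracts the unit name and whether it was being enabled; the line layout is taken from the log, while the helper name and regexes are illustrative assumptions:

```python
import re

# Argument layout as logged: "... Invoked with enabled=True masked=False name=<unit> ..."
NAME_RE = re.compile(r"\bname=(\S+)")
ENABLED_RE = re.compile(r"\benabled=(\S+)")

def systemd_task(line: str):
    """Return (unit, enabled?) for an ansible.builtin.systemd log record, else None."""
    if "ansible-ansible.builtin.systemd Invoked with" not in line:
        return None
    name = NAME_RE.search(line)
    enabled = ENABLED_RE.search(line)
    if not name or not enabled:
        return None
    return (name.group(1), enabled.group(1) == "True")

# Sample taken from the 04:29:44 entry above.
sample = ("Dec 2 04:29:44 localhost python3.9[194408]: "
          "ansible-ansible.builtin.systemd Invoked with enabled=True "
          "masked=False name=virtqemud.socket daemon_reload=False "
          "daemon_reexec=False scope=system no_block=False state=None force=None")
```

Applied over the whole section, this shows one task per libvirt socket (`virtlogd`, `virtnodedevd`, `virtproxyd`, `virtqemud`, `virtsecretd` plus their `-ro`/`-admin` variants) being enabled, and `virtproxyd-tls.socket` being the one unit stopped and masked.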
Dec 2 04:30:02 localhost python3.9[196746]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:30:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:30:03.137 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:30:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:30:03.139 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:30:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:30:03.139 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:30:03 localhost python3.9[196861]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667802.4253078-1646-76431641510727/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12897 DF PROTO=TCP SPT=45686 DPT=9882 SEQ=3776684803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54316B10000000001030307)
Dec 2 04:30:04 localhost python3.9[196971]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:30:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12898 DF PROTO=TCP SPT=45686 DPT=9882 SEQ=3776684803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5431AA30000000001030307)
Dec 2 04:30:04 localhost python3.9[197059]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667803.5914452-1646-38514141710391/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:05 localhost python3.9[197169]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:30:06 localhost python3.9[197259]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667805.0074477-1646-116104007768948/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12899 DF PROTO=TCP SPT=45686 DPT=9882 SEQ=3776684803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54322A30000000001030307)
Dec 2 04:30:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:30:07 localhost podman[197277]: 2025-12-02 09:30:07.087541011 +0000 UTC m=+0.087807192 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:30:07 localhost podman[197277]: 2025-12-02 09:30:07.098768246 +0000 UTC m=+0.099034427 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 2 04:30:07 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:30:07 localhost python3.9[197387]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:08 localhost python3.9[197497]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:08 localhost python3.9[197607]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:09 localhost python3.9[197717]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:09 localhost python3.9[197827]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36923 DF PROTO=TCP SPT=41554 DPT=9100 SEQ=1344575929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54331220000000001030307)
Dec 2 04:30:10 localhost python3.9[197972]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:10 localhost sshd[198065]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:30:10 localhost python3.9[198140]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:11 localhost python3.9[198281]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:12 localhost python3.9[198393]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:12 localhost python3.9[198521]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9798 DF PROTO=TCP SPT=37900 DPT=9105 SEQ=3808598112 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5433BE20000000001030307)
Dec 2 04:30:13 localhost python3.9[198631]: 
ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:13 localhost python3.9[198741]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:14 localhost python3.9[198851]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:15 localhost python3.9[198961]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:15 localhost python3.9[199071]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d 
state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64295 DF PROTO=TCP SPT=34866 DPT=9102 SEQ=166964221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543470E0000000001030307) Dec 2 04:30:18 localhost python3.9[199181]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:18 localhost python3.9[199269]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667818.0421448-2308-54985032620514/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64297 DF PROTO=TCP SPT=34866 DPT=9102 SEQ=166964221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54353220000000001030307) Dec 2 04:30:19 localhost python3.9[199379]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True 
get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:20 localhost python3.9[199467]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667819.128598-2308-115298099314289/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:20 localhost python3.9[199577]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:21 localhost python3.9[199665]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667820.209472-2308-247384621504066/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26779 DF PROTO=TCP SPT=35554 DPT=9101 SEQ=1272628940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5435D220000000001030307) Dec 2 04:30:21 localhost python3.9[199775]: ansible-ansible.legacy.stat Invoked with 
path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:22 localhost python3.9[199863]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667821.2708068-2308-221379035764601/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:23 localhost python3.9[199973]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:23 localhost python3.9[200061]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667822.6093314-2308-109205659591544/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:24 localhost python3.9[200171]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:24 localhost python3.9[200259]: ansible-ansible.legacy.copy 
Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667823.7027185-2308-35672199100509/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:25 localhost python3.9[200369]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:25 localhost python3.9[200457]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667824.8461573-2308-261627378298264/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:26 localhost python3.9[200567]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:27 localhost python3.9[200655]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667826.1349342-2308-39897930903925/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 
checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1797 DF PROTO=TCP SPT=57938 DPT=9101 SEQ=1718877674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54372A20000000001030307) Dec 2 04:30:27 localhost python3.9[200765]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:28 localhost python3.9[200853]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667827.2282357-2308-68589797032285/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:29 localhost python3.9[200963]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:29 localhost python3.9[201051]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667828.5740378-2308-4511502889548/.source.conf 
follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:30 localhost python3.9[201161]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:30 localhost python3.9[201249]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667829.6379445-2308-204901045801869/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:31 localhost python3.9[201359]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64299 DF PROTO=TCP SPT=34866 DPT=9102 SEQ=166964221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54383220000000001030307) Dec 2 04:30:31 localhost python3.9[201447]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764667830.6486254-2308-97054831665936/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:32 localhost python3.9[201557]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:32 localhost python3.9[201645]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667831.7083123-2308-87635549735233/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. 
Dec 2 04:30:33 localhost podman[201754]: 2025-12-02 09:30:33.095545263 +0000 UTC m=+0.089611866 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:30:33 localhost systemd[1]: tmp-crun.mMsXLL.mount: Deactivated successfully. 
Dec 2 04:30:33 localhost podman[201754]: 2025-12-02 09:30:33.133958024 +0000 UTC m=+0.128024617 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:30:33 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:30:33 localhost python3.9[201761]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3250 DF PROTO=TCP SPT=51620 DPT=9882 SEQ=1328832982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5438BE00000000001030307) Dec 2 04:30:33 localhost python3.9[201869]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667832.7533855-2308-103660482533503/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:34 localhost python3.9[201977]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:30:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3251 DF PROTO=TCP SPT=51620 DPT=9882 SEQ=1328832982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5438FE20000000001030307) Dec 2 04:30:35 localhost python3.9[202090]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm 
persistent=True state=True ignore_selinux_state=False Dec 2 04:30:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3252 DF PROTO=TCP SPT=51620 DPT=9882 SEQ=1328832982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54397E20000000001030307) Dec 2 04:30:36 localhost python3.9[202200]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:30:36 localhost systemd[1]: Reloading. Dec 2 04:30:36 localhost systemd-rc-local-generator[202218]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:30:36 localhost systemd-sysv-generator[202225]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:30:37 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:37 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:37 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:37 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:30:37 localhost podman[202237]: 2025-12-02 09:30:37.229915417 +0000 UTC m=+0.079265103 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 04:30:37 localhost podman[202237]: 2025-12-02 09:30:37.240998632 +0000 UTC m=+0.090348338 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 04:30:37 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:30:38 localhost systemd[1]: Starting libvirt logging daemon socket... Dec 2 04:30:38 localhost systemd[1]: Listening on libvirt logging daemon socket. Dec 2 04:30:38 localhost systemd[1]: Starting libvirt logging daemon admin socket... Dec 2 04:30:38 localhost systemd[1]: Listening on libvirt logging daemon admin socket. Dec 2 04:30:38 localhost systemd[1]: Starting libvirt logging daemon... Dec 2 04:30:38 localhost systemd[1]: Started libvirt logging daemon. Dec 2 04:30:39 localhost python3.9[202367]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:30:39 localhost systemd[1]: Reloading. Dec 2 04:30:39 localhost systemd-rc-local-generator[202394]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:30:39 localhost systemd-sysv-generator[202398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: Starting libvirt nodedev daemon socket... Dec 2 04:30:39 localhost systemd[1]: Listening on libvirt nodedev daemon socket. Dec 2 04:30:39 localhost systemd[1]: Starting libvirt nodedev daemon admin socket... Dec 2 04:30:39 localhost systemd[1]: Starting libvirt nodedev daemon read-only socket... Dec 2 04:30:39 localhost systemd[1]: Listening on libvirt nodedev daemon admin socket. Dec 2 04:30:39 localhost systemd[1]: Listening on libvirt nodedev daemon read-only socket. Dec 2 04:30:39 localhost systemd[1]: Started libvirt nodedev daemon. 
Dec 2 04:30:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31962 DF PROTO=TCP SPT=32846 DPT=9100 SEQ=3270466836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543A5220000000001030307) Dec 2 04:30:40 localhost python3.9[202542]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:30:40 localhost systemd[1]: Reloading. Dec 2 04:30:40 localhost systemd-sysv-generator[202574]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:30:40 localhost systemd-rc-local-generator[202570]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Dec 2 04:30:40 localhost systemd[1]: Starting libvirt proxy daemon socket... Dec 2 04:30:40 localhost systemd[1]: Listening on libvirt proxy daemon socket. Dec 2 04:30:40 localhost systemd[1]: Starting libvirt proxy daemon admin socket... Dec 2 04:30:40 localhost systemd[1]: Starting libvirt proxy daemon read-only socket... Dec 2 04:30:40 localhost systemd[1]: Listening on libvirt proxy daemon admin socket. Dec 2 04:30:40 localhost systemd[1]: Listening on libvirt proxy daemon read-only socket. Dec 2 04:30:40 localhost systemd[1]: Started libvirt proxy daemon. Dec 2 04:30:40 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Dec 2 04:30:41 localhost systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged. Dec 2 04:30:41 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service. Dec 2 04:30:41 localhost python3.9[202722]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:30:41 localhost systemd[1]: Reloading. Dec 2 04:30:41 localhost systemd-rc-local-generator[202748]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 04:30:41 localhost systemd-sysv-generator[202753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: Listening on libvirt locking daemon socket. Dec 2 04:30:41 localhost systemd[1]: Starting libvirt QEMU daemon socket... Dec 2 04:30:41 localhost systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Dec 2 04:30:41 localhost systemd[1]: Starting Virtual Machine and Container Registration Service... Dec 2 04:30:41 localhost systemd[1]: Listening on libvirt QEMU daemon socket. Dec 2 04:30:41 localhost systemd[1]: Starting libvirt QEMU daemon admin socket... Dec 2 04:30:41 localhost systemd[1]: Starting libvirt QEMU daemon read-only socket... Dec 2 04:30:41 localhost systemd[1]: Listening on libvirt QEMU daemon admin socket. Dec 2 04:30:41 localhost systemd[1]: Listening on libvirt QEMU daemon read-only socket. Dec 2 04:30:41 localhost systemd[1]: Started Virtual Machine and Container Registration Service. Dec 2 04:30:41 localhost systemd[1]: Started libvirt QEMU daemon. Dec 2 04:30:42 localhost setroubleshoot[202581]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a8ab41f9-e430-4d93-8a79-59719374bbe5 Dec 2 04:30:42 localhost setroubleshoot[202581]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

***** Plugin dac_override (91.4 confidence) suggests **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

***** Plugin catchall (9.59 confidence) suggests **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Dec 2 04:30:42 localhost setroubleshoot[202581]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a8ab41f9-e430-4d93-8a79-59719374bbe5 Dec 2 04:30:42 localhost setroubleshoot[202581]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

***** Plugin dac_override (91.4 confidence) suggests **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

***** Plugin catchall (9.59 confidence) suggests **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Dec 2 04:30:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36924 DF PROTO=TCP SPT=41554 DPT=9100 SEQ=1344575929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543AF230000000001030307) Dec 2 04:30:43 localhost python3.9[202899]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:30:43 localhost systemd[1]: Reloading. Dec 2 04:30:43 localhost systemd-sysv-generator[202930]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:30:43 localhost systemd-rc-local-generator[202926]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:43 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:43 localhost systemd[1]: Starting libvirt secret daemon socket... Dec 2 04:30:43 localhost systemd[1]: Listening on libvirt secret daemon socket. Dec 2 04:30:43 localhost systemd[1]: Starting libvirt secret daemon admin socket... Dec 2 04:30:43 localhost systemd[1]: Starting libvirt secret daemon read-only socket... Dec 2 04:30:43 localhost systemd[1]: Listening on libvirt secret daemon admin socket. Dec 2 04:30:43 localhost systemd[1]: Listening on libvirt secret daemon read-only socket. Dec 2 04:30:43 localhost systemd[1]: Started libvirt secret daemon. 
Dec 2 04:30:44 localhost sshd[203032]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:30:45 localhost python3.9[203072]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:45 localhost python3.9[203182]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 2 04:30:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21332 DF PROTO=TCP SPT=45408 DPT=9102 SEQ=2038881755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543BC3E0000000001030307) Dec 2 04:30:46 localhost python3.9[203292]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:30:47 localhost python3.9[203404]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None 
contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 2 04:30:48 localhost python3.9[203512]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3254 DF PROTO=TCP SPT=51620 DPT=9882 SEQ=1328832982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543C7230000000001030307) Dec 2 04:30:49 localhost python3.9[203598]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667848.0622027-3173-278766117252849/.source.xml follow=False _original_basename=secret.xml.j2 checksum=45e14b3898e47796a04e3213d8ff716cad2ef6d4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:49 localhost python3.9[203708]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine c7c8e171-a193-56fb-95fa-8879fcfa7074#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:30:50 localhost python3.9[203828]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None 
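The ansible-ansible.legacy.command task above extracts the Ceph cluster fsid with an awk/xargs pipeline against /var/lib/openstack/config/ceph/ceph.conf. A standalone sketch of that same pipeline, run against a throwaway config file (the fsid value below is made up for illustration):

```shell
# Reproduce the fsid extraction from the ansible task, against a temp file.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[global]
fsid = 6b2a6c47-7b6f-4fd6-9d8b-0123456789ab
mon_host = 192.0.2.10
EOF
# awk splits on '=', prints the value side; xargs trims the surrounding whitespace.
fsid=$(awk -F '=' '/fsid/ {print $2}' "$tmp" | xargs)
echo "$fsid"
rm -f "$tmp"
```

The xargs at the end is doing duty as a whitespace trimmer, which is why the task pipes through it rather than printing $2 directly.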
Dec 2 04:30:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1799 DF PROTO=TCP SPT=57938 DPT=9101 SEQ=1718877674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543D3220000000001030307) Dec 2 04:30:52 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully. Dec 2 04:30:52 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. Dec 2 04:30:54 localhost python3.9[204165]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:55 localhost python3.9[204275]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:55 localhost python3.9[204363]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667854.6742232-3338-47073940731427/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 
PREC=0x00 TTL=62 ID=26533 DF PROTO=TCP SPT=58322 DPT=9101 SEQ=3739041820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543E7E20000000001030307) Dec 2 04:30:57 localhost python3.9[204473]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:57 localhost python3.9[204583]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:58 localhost python3.9[204640]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:59 localhost python3.9[204750]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:59 localhost python3.9[204807]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.p3ozdzjc recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:00 localhost python3.9[204917]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:31:00 localhost python3.9[204974]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:01 localhost python3.9[205084]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:31:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21336 DF PROTO=TCP SPT=45408 DPT=9102 SEQ=2038881755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543F9230000000001030307) Dec 2 04:31:02 localhost python3[205195]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Dec 2 04:31:02 localhost python3.9[205305]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False 
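The kernel "DROPPING:" entries that recur throughout this log are nftables/iptables LOG-style records with space-separated KEY=VALUE fields. A minimal sketch (not part of the log) for pulling the source, destination, and destination port out of one such entry; the leading space in each pattern avoids false matches on MACSRC=/MACDST=:

```shell
# One of the DROPPING entries from the log above, verbatim.
line='Dec 2 04:30:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31962 DF PROTO=TCP SPT=32846 DPT=9100 SEQ=3270466836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0'
# grep -o emits just the matched field; cut keeps the value after '='.
src=$(echo "$line" | grep -o ' SRC=[0-9.]*' | cut -d= -f2)
dst=$(echo "$line" | grep -o ' DST=[0-9.]*' | cut -d= -f2)
dpt=$(echo "$line" | grep -o ' DPT=[0-9]*'  | cut -d= -f2)
echo "$src -> $dst port $dpt"
```

Run against the entries here, this shows the same flow being dropped repeatedly: 192.168.122.10 probing ports 9100/9101/9102/9882 (Prometheus-style exporter ports) on this host over br-ex.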
Dec 2 04:31:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:31:03.137 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404 Dec 2 04:31:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:31:03.138 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409 Dec 2 04:31:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:31:03.138 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423 Dec 2 04:31:03 localhost python3.9[205362]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60346 DF PROTO=TCP SPT=49468 DPT=9882 SEQ=3836888894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54401100000000001030307) Dec 2 04:31:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. 
Dec 2 04:31:03 localhost systemd[1]: tmp-crun.KqJqzf.mount: Deactivated successfully. Dec 2 04:31:03 localhost podman[205472]: 2025-12-02 09:31:03.984216539 +0000 UTC m=+0.096163846 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 2 04:31:04 localhost podman[205472]: 2025-12-02 09:31:04.069263707 +0000 UTC m=+0.181210994 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3) Dec 2 04:31:04 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:31:04 localhost python3.9[205473]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:31:04 localhost python3.9[205552]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60347 DF PROTO=TCP SPT=49468 DPT=9882 SEQ=3836888894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54405220000000001030307) Dec 2 04:31:05 localhost python3.9[205662]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:31:05 localhost python3.9[205719]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:06 localhost python3.9[205829]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True 
get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:31:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60348 DF PROTO=TCP SPT=49468 DPT=9882 SEQ=3836888894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5440D230000000001030307) Dec 2 04:31:06 localhost python3.9[205886]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:31:07 localhost systemd[1]: tmp-crun.9BbOeb.mount: Deactivated successfully. 
Dec 2 04:31:07 localhost podman[205997]: 2025-12-02 09:31:07.859782233 +0000 UTC m=+0.091906009 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 2 04:31:07 localhost podman[205997]: 2025-12-02 09:31:07.889953451 +0000 UTC m=+0.122077197 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 2 04:31:07 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:31:07 localhost python3.9[205996]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:31:08 localhost python3.9[206105]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667867.3871915-3713-121318721951660/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:31:09 localhost python3.9[206215]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:31:09 localhost python3.9[206325]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:31:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38868 DF PROTO=TCP SPT=37588 DPT=9100 SEQ=1764008649 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5441B220000000001030307)
Dec 2 04:31:11 localhost python3.9[206438]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:31:11 localhost python3.9[206548]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:31:12 localhost python3.9[206710]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:31:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46105 DF PROTO=TCP SPT=46988 DPT=9105 SEQ=2120226542 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54426220000000001030307)
Dec 2 04:31:13 localhost python3.9[206839]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:31:14 localhost python3.9[206970]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:31:15 localhost python3.9[207080]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:31:15 localhost python3.9[207168]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667874.5200412-3929-262145964523067/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:31:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9803 DF PROTO=TCP SPT=37900 DPT=9105 SEQ=3808598112 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54431230000000001030307)
Dec 2 04:31:16 localhost python3.9[207278]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:31:16 localhost python3.9[207366]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667875.9209073-3974-35492129011824/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:31:17 localhost python3.9[207476]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:31:17 localhost sshd[207549]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:31:18 localhost python3.9[207566]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667877.1514735-4019-221420938547236/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:31:18 localhost python3.9[207676]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:31:18 localhost systemd[1]: Reloading.
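The blockinfile task logged at 04:31:11 writes a marker-delimited block of nftables include statements into /etc/sysconfig/nftables.conf (the `#012` sequences in the log are octal escapes for newlines). A simplified sketch of the text it produces, using the default `# {mark} ANSIBLE MANAGED BLOCK` markers; the real module also replaces an existing marked block in place and runs the `nft -c -f %s` validation before committing the file:

```python
# Compose the marked block of include statements that the logged
# ansible.builtin.blockinfile task writes into /etc/sysconfig/nftables.conf.
MARKER = "# {mark} ANSIBLE MANAGED BLOCK"

INCLUDES = [
    "/etc/nftables/iptables.nft",
    "/etc/nftables/edpm-chains.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-jumps.nft",
]

def managed_block(paths):
    """Wrap include statements in BEGIN/END markers, one per line."""
    out = [MARKER.format(mark="BEGIN")]
    out.extend(f'include "{path}"' for path in paths)
    out.append(MARKER.format(mark="END"))
    return "\n".join(out)

print(managed_block(INCLUDES))
```

The include order mirrors the `cat ... | nft -c -f -` validation at 04:31:09: chains must be defined before the rules and jumps that reference them.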
Dec 2 04:31:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60350 DF PROTO=TCP SPT=49468 DPT=9882 SEQ=3836888894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5443D220000000001030307)
Dec 2 04:31:19 localhost systemd-rc-local-generator[207703]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:31:19 localhost systemd-sysv-generator[207706]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:19 localhost systemd[1]: Reached target edpm_libvirt.target.
Dec 2 04:31:21 localhost python3.9[207825]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 2 04:31:21 localhost systemd[1]: Reloading.
Dec 2 04:31:21 localhost systemd-sysv-generator[207853]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:31:21 localhost systemd-rc-local-generator[207850]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:22 localhost systemd[1]: Reloading.
Dec 2 04:31:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26535 DF PROTO=TCP SPT=58322 DPT=9101 SEQ=3739041820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54449220000000001030307)
Dec 2 04:31:22 localhost systemd-rc-local-generator[207887]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:31:22 localhost systemd-sysv-generator[207893]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:23 localhost systemd[1]: session-52.scope: Deactivated successfully.
Dec 2 04:31:23 localhost systemd[1]: session-52.scope: Consumed 3min 42.853s CPU time.
Dec 2 04:31:23 localhost systemd-logind[760]: Session 52 logged out. Waiting for processes to exit.
Dec 2 04:31:23 localhost systemd-logind[760]: Removed session 52.
Dec 2 04:31:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51994 DF PROTO=TCP SPT=42566 DPT=9101 SEQ=4124349749 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5445CE30000000001030307)
Dec 2 04:31:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46108 DF PROTO=TCP SPT=46988 DPT=9105 SEQ=2120226542 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5445F220000000001030307)
Dec 2 04:31:29 localhost sshd[207918]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:31:29 localhost systemd-logind[760]: New session 53 of user zuul.
Dec 2 04:31:29 localhost systemd[1]: Started Session 53 of User zuul.
Dec 2 04:31:30 localhost python3.9[208029]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:31:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35279 DF PROTO=TCP SPT=44554 DPT=9102 SEQ=2200606831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5446D220000000001030307)
Dec 2 04:31:31 localhost python3.9[208141]: ansible-ansible.builtin.service_facts Invoked
Dec 2 04:31:31 localhost network[208158]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 2 04:31:31 localhost network[208159]: 'network-scripts' will be removed from distribution in near future.
Dec 2 04:31:31 localhost network[208160]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 2 04:31:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:31:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:31:34 localhost systemd[1]: tmp-crun.tbHTgM.mount: Deactivated successfully.
Dec 2 04:31:34 localhost podman[208210]: 2025-12-02 09:31:34.45500385 +0000 UTC m=+0.109136466 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 2 04:31:34 localhost podman[208210]: 2025-12-02 09:31:34.516893481 +0000 UTC m=+0.171026037 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 2 04:31:34 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
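The podman journal entries above follow a fixed shape: a timestamp, the event type (`health_status`, `exec_died`), the 64-hex-digit container ID, and a parenthesised list of comma-separated attributes. A small sketch of extracting the interesting bits with a regex (the `parse_podman_event` helper is hypothetical; the nested `config_data` dict is skipped for simplicity since its entries use `:` rather than `=`):

```python
import re

# Hypothetical helper: pull event type, short container ID, and health status
# from a podman journal line like the health_status/exec_died entries above.
EVENT_RE = re.compile(r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) \((?P<attrs>.*)\)$")

def parse_podman_event(line):
    m = EVENT_RE.search(line)
    if not m:
        return None
    # Attributes are comma-separated key=value pairs; tokens inside the
    # nested config_data dict have no '=' and are filtered out.
    attrs = dict(
        kv.split("=", 1)
        for kv in m.group("attrs").split(", ")
        if "=" in kv and not kv.startswith("config_data")
    )
    return m.group("event"), m.group("cid")[:12], attrs.get("health_status")

sample = ("Dec 2 04:31:34 localhost podman[208210]: container health_status "
          "c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf "
          "(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, "
          "name=ovn_controller, health_status=healthy)")
print(parse_podman_event(sample))  # ('health_status', 'c02da970e492', 'healthy')
```

Each healthcheck run appears as a transient `<container-id>.service` unit, which is why every `health_status` event is bracketed by a systemd "Started" and "Deactivated successfully" pair.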
Dec 2 04:31:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55003 DF PROTO=TCP SPT=42272 DPT=9882 SEQ=1824040291 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5447A620000000001030307)
Dec 2 04:31:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55004 DF PROTO=TCP SPT=42272 DPT=9882 SEQ=1824040291 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54482620000000001030307)
Dec 2 04:31:37 localhost python3.9[208418]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 2 04:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:31:38 localhost podman[208423]: 2025-12-02 09:31:38.064893697 +0000 UTC m=+0.070780004 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:31:38 localhost podman[208423]: 2025-12-02 09:31:38.100908918 +0000 UTC m=+0.106795185 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 2 04:31:38 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:31:39 localhost python3.9[208499]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:31:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20895 DF PROTO=TCP SPT=52202 DPT=9100 SEQ=2480210252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5448F230000000001030307)
Dec 2 04:31:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39303 DF PROTO=TCP SPT=58708 DPT=9105 SEQ=2019460385 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5449B620000000001030307)
Dec 2 04:31:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16788 DF PROTO=TCP SPT=54210 DPT=9102 SEQ=1298376918 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD544A69E0000000001030307)
Dec 2 04:31:47 localhost python3.9[208611]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:31:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16790 DF PROTO=TCP SPT=54210 DPT=9102 SEQ=1298376918 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD544B2A20000000001030307)
Dec 2 04:31:49 localhost python3.9[208723]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi mode=preserve remote_src=True src=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi/ backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:31:50 localhost python3.9[208833]: ansible-ansible.legacy.command Invoked with _raw_params=mv "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi" "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:31:51 localhost python3.9[208944]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:31:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51996 DF PROTO=TCP SPT=42566 DPT=9101 SEQ=4124349749 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD544BD220000000001030307)
Dec 2 04:31:52 localhost python3.9[209055]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:31:52 localhost python3.9[209166]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:31:53 localhost python3.9[209278]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:31:55 localhost python3.9[209388]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:31:55 localhost systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 2 04:31:56 localhost python3.9[209502]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:31:56 localhost systemd[1]: Reloading.
Dec 2 04:31:56 localhost systemd-rc-local-generator[209525]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:31:56 localhost systemd-sysv-generator[209529]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:56 localhost systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi). Dec 2 04:31:56 localhost systemd[1]: Starting Open-iSCSI... Dec 2 04:31:56 localhost iscsid[209544]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 2 04:31:56 localhost iscsid[209544]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 2 04:31:56 localhost iscsid[209544]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 2 04:31:56 localhost iscsid[209544]: If using hardware iscsi like qla4xxx this message can be ignored. 
Dec 2 04:31:56 localhost iscsid[209544]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 2 04:31:56 localhost iscsid[209544]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 2 04:31:56 localhost iscsid[209544]: iscsid: can't open iscsid.ipc_auth_uid configuration file /etc/iscsi/iscsid.conf Dec 2 04:31:56 localhost systemd[1]: Started Open-iSCSI. Dec 2 04:31:56 localhost systemd[1]: Starting Logout off all iSCSI sessions on shutdown... Dec 2 04:31:56 localhost systemd[1]: Finished Logout off all iSCSI sessions on shutdown. Dec 2 04:31:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46423 DF PROTO=TCP SPT=32964 DPT=9101 SEQ=2095925668 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD544D2220000000001030307) Dec 2 04:31:58 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Dec 2 04:31:58 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Dec 2 04:31:58 localhost python3.9[209656]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:31:58 localhost network[209685]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:31:58 localhost network[209686]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:31:58 localhost network[209687]: It is advised to switch to 'NetworkManager' instead for network management. Dec 2 04:31:58 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service. Dec 2 04:31:59 localhost setroubleshoot[209574]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. 
For complete SELinux messages run: sealert -l ac0f5b60-bb8c-402a-9293-a298ac2eb8d8 Dec 2 04:31:59 localhost setroubleshoot[209574]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.

***** Plugin catchall (100. confidence) suggests **************************

If you believe that iscsid should be allowed search access on the iscsi directory by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
# semodule -X 300 -i my-iscsid.pp
Dec 2 04:32:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:32:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16792 DF PROTO=TCP SPT=54210 DPT=9102 SEQ=1298376918 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD544E3220000000001030307) Dec 2 04:32:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:32:03.138 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:32:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:32:03.139 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:32:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:32:03.139 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:32:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25796 DF PROTO=TCP SPT=42340 DPT=9882 SEQ=4285322317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD544EB710000000001030307) Dec 2 04:32:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25797 DF PROTO=TCP SPT=42340 DPT=9882 SEQ=4285322317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AD544EF620000000001030307) Dec 2 04:32:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:32:05 localhost podman[209830]: 2025-12-02 09:32:05.087301879 +0000 UTC m=+0.084886379 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true) Dec 2 04:32:05 localhost podman[209830]: 2025-12-02 09:32:05.150117866 +0000 UTC m=+0.147702316 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 2 04:32:05 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:32:06 localhost python3.9[209947]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 2 04:32:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25798 DF PROTO=TCP SPT=42340 DPT=9882 SEQ=4285322317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD544F7620000000001030307) Dec 2 04:32:06 localhost python3.9[210057]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Dec 2 04:32:07 localhost python3.9[210171]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:32:08 localhost python3.9[210259]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667927.2019618-458-94511307963775/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. 
Dec 2 04:32:09 localhost podman[210369]: 2025-12-02 09:32:09.038864673 +0000 UTC m=+0.086106097 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 04:32:09 localhost podman[210369]: 2025-12-02 09:32:09.072886757 +0000 UTC 
m=+0.120128161 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 04:32:09 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:32:09 localhost python3.9[210370]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:09 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully. Dec 2 04:32:09 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. Dec 2 04:32:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49094 DF PROTO=TCP SPT=48774 DPT=9100 SEQ=2725460907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54505220000000001030307) Dec 2 04:32:10 localhost python3.9[210497]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:32:10 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 2 04:32:10 localhost systemd[1]: Stopped Load Kernel Modules. Dec 2 04:32:10 localhost systemd[1]: Stopping Load Kernel Modules... Dec 2 04:32:10 localhost systemd[1]: Starting Load Kernel Modules... Dec 2 04:32:10 localhost systemd-modules-load[210501]: Module 'msr' is built in Dec 2 04:32:10 localhost systemd[1]: Finished Load Kernel Modules. 
Dec 2 04:32:11 localhost python3.9[210611]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:32:12 localhost python3.9[210721]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:32:12 localhost python3.9[210831]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:32:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50232 DF PROTO=TCP SPT=54402 DPT=9105 SEQ=2939221781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54510A20000000001030307) Dec 2 04:32:13 localhost python3.9[210941]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:32:13 localhost python3.9[211033]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667932.9645755-632-128685646380296/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Dec 2 04:32:14 localhost python3.9[211217]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:32:14 localhost podman[211251]: 2025-12-02 09:32:14.800065507 +0000 UTC m=+0.105299489 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-type=git, CEPH_POINT_RELEASE=, release=1763362218, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_CLEAN=True, com.redhat.component=rhceph-container, name=rhceph, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:32:14 localhost podman[211251]: 2025-12-02 09:32:14.931854699 +0000 UTC m=+0.237088711 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, RELEASE=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True) Dec 2 04:32:15 localhost python3.9[211425]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58427 DF PROTO=TCP SPT=33362 DPT=9102 SEQ=719211157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5451BCE0000000001030307) Dec 2 04:32:16 localhost python3.9[211601]: ansible-ansible.builtin.replace Invoked with 
path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:16 localhost python3.9[211734]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:18 localhost python3.9[211844]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25800 DF PROTO=TCP SPT=42340 DPT=9882 SEQ=4285322317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54527220000000001030307)
Dec 2 04:32:18 localhost python3.9[211954]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:19 localhost python3.9[212064]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:20 localhost python3.9[212174]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:21 localhost python3.9[212284]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:32:21 localhost python3.9[212396]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46425 DF PROTO=TCP SPT=32964 DPT=9101 SEQ=2095925668 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54533230000000001030307)
Dec 2 04:32:22 localhost python3.9[212506]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:32:22 localhost sshd[212524]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:32:23 localhost python3.9[212618]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:32:23 localhost python3.9[212675]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:32:24 localhost python3.9[212785]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:32:24 localhost python3.9[212842]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:32:25 localhost python3.9[212952]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:26 localhost python3.9[213062]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:32:26 localhost python3.9[213119]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12445 DF PROTO=TCP SPT=52094 DPT=9101 SEQ=362888855 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54547630000000001030307)
Dec 2 04:32:27 localhost python3.9[213229]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:32:28 localhost python3.9[213286]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:29 localhost python3.9[213396]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:32:29 localhost systemd[1]: Reloading.
Dec 2 04:32:29 localhost systemd-rc-local-generator[213423]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:32:29 localhost systemd-sysv-generator[213427]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58431 DF PROTO=TCP SPT=33362 DPT=9102 SEQ=719211157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54557220000000001030307)
Dec 2 04:32:31 localhost python3.9[213544]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:32:31 localhost python3.9[213601]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:32 localhost python3.9[213711]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:32:32 localhost python3.9[213768]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3443 DF PROTO=TCP SPT=59588 DPT=9882 SEQ=2604281941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54560A10000000001030307)
Dec 2 04:32:33 localhost python3.9[213878]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:32:33 localhost systemd[1]: Reloading.
Dec 2 04:32:33 localhost systemd-rc-local-generator[213904]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:32:33 localhost systemd-sysv-generator[213907]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:32:33 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:33 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:33 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:33 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:32:33 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:33 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:33 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:33 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:34 localhost systemd[1]: Starting Create netns directory...
Dec 2 04:32:34 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 2 04:32:34 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 2 04:32:34 localhost systemd[1]: Finished Create netns directory.
Dec 2 04:32:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3444 DF PROTO=TCP SPT=59588 DPT=9882 SEQ=2604281941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54564A20000000001030307)
Dec 2 04:32:35 localhost python3.9[214029]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:32:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:32:35 localhost podman[214140]: 2025-12-02 09:32:35.709203274 +0000 UTC m=+0.088887335 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:32:35 localhost podman[214140]: 2025-12-02 09:32:35.751010195 +0000 UTC m=+0.130694316 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller)
Dec 2 04:32:35 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:32:35 localhost python3.9[214139]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:32:36 localhost python3.9[214251]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667955.2911882-1253-207524145104927/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:32:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3445 DF PROTO=TCP SPT=59588 DPT=9882 SEQ=2604281941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5456CA20000000001030307)
Dec 2 04:32:37 localhost python3.9[214361]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:32:38 localhost python3.9[214471]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:32:38 localhost python3.9[214559]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667957.572149-1327-140353524971938/.source.json _original_basename=.9fxn10en follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:32:39 localhost systemd[1]: tmp-crun.EvvGOp.mount: Deactivated successfully.
Dec 2 04:32:39 localhost podman[214670]: 2025-12-02 09:32:39.245167822 +0000 UTC m=+0.099219191 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 2 04:32:39 localhost podman[214670]: 2025-12-02 09:32:39.252779945 +0000 UTC m=+0.106831214 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Dec 2 04:32:39 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:32:39 localhost python3.9[214669]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:39 localhost systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 2 04:32:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25423 DF PROTO=TCP SPT=57064 DPT=9100 SEQ=3621728756 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5457B220000000001030307)
Dec 2 04:32:40 localhost systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 2 04:32:41 localhost python3.9[214998]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 2 04:32:42 localhost python3.9[215108]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 2 04:32:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42173 DF PROTO=TCP SPT=39690 DPT=9105 SEQ=325339395 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54585A20000000001030307)
Dec 2 04:32:43 localhost python3.9[215218]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 2 04:32:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65288 DF PROTO=TCP SPT=35386 DPT=9102 SEQ=1053022010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54590FF0000000001030307)
Dec 2 04:32:47 localhost python3[215354]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 2 04:32:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3447 DF PROTO=TCP SPT=59588 DPT=9882 SEQ=2604281941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5459D220000000001030307)
Dec 2 04:32:50 localhost podman[215367]: 2025-12-02 09:32:48.102437484 +0000 UTC m=+0.047375703 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 2 04:32:50 localhost podman[215415]:
Dec 2 04:32:50 localhost podman[215415]: 2025-12-02 09:32:50.325301807 +0000 UTC m=+0.074368480 container create 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 2 04:32:50 localhost podman[215415]: 2025-12-02 09:32:50.286712554 +0000 UTC m=+0.035779287 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 2 04:32:50 localhost python3[215354]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 2 04:32:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12447 DF PROTO=TCP SPT=52094 DPT=9101 SEQ=362888855 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545A7220000000001030307)
Dec 2 04:32:51 localhost python3.9[215563]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:32:52 localhost systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 2 04:32:52 localhost systemd[1]: virtqemud.service: Deactivated successfully.
Dec 2 04:32:52 localhost python3.9[215677]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:53 localhost sshd[215678]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:32:53 localhost python3.9[215734]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:32:54 localhost python3.9[215843]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764667973.8296232-1593-95182312249303/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:32:55 localhost python3.9[215898]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 2 04:32:55 localhost systemd[1]: Reloading.
Dec 2 04:32:55 localhost systemd-rc-local-generator[215923]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:32:55 localhost systemd-sysv-generator[215927]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:56 localhost python3.9[215988]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:32:56 localhost systemd[1]: Reloading.
Dec 2 04:32:56 localhost systemd-rc-local-generator[216017]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:32:56 localhost systemd-sysv-generator[216021]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:32:56 localhost systemd[1]: Starting multipathd container...
Dec 2 04:32:56 localhost systemd[1]: Started libcrun container.
Dec 2 04:32:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2b08aa15d7ed98a99a2ca0ad0a0527b7b07dbb69bb9536db0aff80261887df/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 2 04:32:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2b08aa15d7ed98a99a2ca0ad0a0527b7b07dbb69bb9536db0aff80261887df/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 2 04:32:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 04:32:57 localhost podman[216030]: 2025-12-02 09:32:57.088558235 +0000 UTC m=+0.314440526 container init 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi',
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 04:32:57 localhost multipathd[216044]: + sudo -E kolla_set_configs Dec 2 04:32:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:32:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60629 DF PROTO=TCP SPT=44204 DPT=9101 SEQ=328230410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545BCA20000000001030307) Dec 2 04:32:57 localhost multipathd[216044]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:32:57 localhost multipathd[216044]: INFO:__main__:Validating config file Dec 2 04:32:57 localhost multipathd[216044]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:32:57 localhost multipathd[216044]: INFO:__main__:Writing out command to execute Dec 2 04:32:57 localhost multipathd[216044]: ++ cat /run_command Dec 2 04:32:57 localhost multipathd[216044]: + CMD='/usr/sbin/multipathd -d' Dec 2 04:32:57 localhost multipathd[216044]: + ARGS= Dec 2 04:32:57 localhost multipathd[216044]: + sudo kolla_copy_cacerts Dec 2 04:32:57 localhost multipathd[216044]: + [[ ! -n '' ]] Dec 2 04:32:57 localhost multipathd[216044]: + . 
kolla_extend_start Dec 2 04:32:57 localhost multipathd[216044]: Running command: '/usr/sbin/multipathd -d' Dec 2 04:32:57 localhost multipathd[216044]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\''' Dec 2 04:32:57 localhost multipathd[216044]: + umask 0022 Dec 2 04:32:57 localhost multipathd[216044]: + exec /usr/sbin/multipathd -d Dec 2 04:32:57 localhost multipathd[216044]: 10141.453365 | --------start up-------- Dec 2 04:32:57 localhost multipathd[216044]: 10141.453383 | read /etc/multipath.conf Dec 2 04:32:57 localhost podman[216030]: 2025-12-02 09:32:57.21761793 +0000 UTC m=+0.443500191 container start 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, 
org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:32:57 localhost podman[216030]: multipathd Dec 2 04:32:57 localhost multipathd[216044]: 10141.457354 | path checkers start up Dec 2 04:32:57 localhost systemd[1]: Started multipathd container. Dec 2 04:32:57 localhost podman[216053]: 2025-12-02 09:32:57.290830293 +0000 UTC m=+0.154821685 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 04:32:57 localhost podman[216053]: 2025-12-02 09:32:57.302762108 +0000 UTC m=+0.166753470 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:32:57 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:32:57 localhost python3.9[216190]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:32:58 localhost python3.9[216302]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:32:59 localhost python3.9[216425]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:32:59 localhost systemd[1]: Stopping multipathd container... Dec 2 04:32:59 localhost multipathd[216044]: 10143.946184 | exit (signal) Dec 2 04:32:59 localhost multipathd[216044]: 10143.946879 | --------shut down------- Dec 2 04:32:59 localhost systemd[1]: libpod-2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.scope: Deactivated successfully. 
Dec 2 04:32:59 localhost podman[216429]: 2025-12-02 09:32:59.749210322 +0000 UTC m=+0.109450944 container died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 04:32:59 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.timer: Deactivated successfully. 
Dec 2 04:32:59 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:32:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e-userdata-shm.mount: Deactivated successfully. Dec 2 04:32:59 localhost systemd[1]: var-lib-containers-storage-overlay-fd2b08aa15d7ed98a99a2ca0ad0a0527b7b07dbb69bb9536db0aff80261887df-merged.mount: Deactivated successfully. Dec 2 04:33:00 localhost podman[216429]: 2025-12-02 09:33:00.010180439 +0000 UTC m=+0.370421031 container cleanup 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 04:33:00 localhost podman[216429]: multipathd Dec 2 04:33:00 localhost podman[216457]: 2025-12-02 09:33:00.123833311 +0000 UTC m=+0.074802593 container cleanup 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 2 04:33:00 localhost podman[216457]: multipathd Dec 2 04:33:00 localhost systemd[1]: edpm_multipathd.service: Deactivated successfully. Dec 2 04:33:00 localhost systemd[1]: Stopped multipathd container. Dec 2 04:33:00 localhost systemd[1]: Starting multipathd container... Dec 2 04:33:00 localhost systemd[1]: Started libcrun container. Dec 2 04:33:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2b08aa15d7ed98a99a2ca0ad0a0527b7b07dbb69bb9536db0aff80261887df/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 2 04:33:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2b08aa15d7ed98a99a2ca0ad0a0527b7b07dbb69bb9536db0aff80261887df/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 2 04:33:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 04:33:00 localhost podman[216468]: 2025-12-02 09:33:00.309268574 +0000 UTC m=+0.151389721 container init 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:33:00 localhost multipathd[216483]: + sudo -E kolla_set_configs Dec 2 04:33:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:33:00 localhost podman[216468]: 2025-12-02 09:33:00.360257256 +0000 UTC m=+0.202378373 container start 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125) Dec 2 04:33:00 localhost podman[216468]: multipathd Dec 2 04:33:00 localhost systemd[1]: Started multipathd container. 
Dec 2 04:33:00 localhost multipathd[216483]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:33:00 localhost multipathd[216483]: INFO:__main__:Validating config file Dec 2 04:33:00 localhost multipathd[216483]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:33:00 localhost multipathd[216483]: INFO:__main__:Writing out command to execute Dec 2 04:33:00 localhost multipathd[216483]: ++ cat /run_command Dec 2 04:33:00 localhost multipathd[216483]: + CMD='/usr/sbin/multipathd -d' Dec 2 04:33:00 localhost multipathd[216483]: + ARGS= Dec 2 04:33:00 localhost multipathd[216483]: + sudo kolla_copy_cacerts Dec 2 04:33:00 localhost multipathd[216483]: + [[ ! -n '' ]] Dec 2 04:33:00 localhost multipathd[216483]: + . kolla_extend_start Dec 2 04:33:00 localhost multipathd[216483]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\''' Dec 2 04:33:00 localhost multipathd[216483]: Running command: '/usr/sbin/multipathd -d' Dec 2 04:33:00 localhost multipathd[216483]: + umask 0022 Dec 2 04:33:00 localhost multipathd[216483]: + exec /usr/sbin/multipathd -d Dec 2 04:33:00 localhost multipathd[216483]: 10144.700198 | --------start up-------- Dec 2 04:33:00 localhost multipathd[216483]: 10144.700223 | read /etc/multipath.conf Dec 2 04:33:00 localhost podman[216492]: 2025-12-02 09:33:00.46224285 +0000 UTC m=+0.099120677 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, tcib_managed=true) Dec 2 04:33:00 localhost multipathd[216483]: 10144.705723 | path checkers start up Dec 2 04:33:00 localhost podman[216492]: 2025-12-02 09:33:00.501478243 +0000 UTC m=+0.138356120 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 04:33:00 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:33:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65292 DF PROTO=TCP SPT=35386 DPT=9102 SEQ=1053022010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545CD220000000001030307)
Dec 2 04:33:01 localhost python3.9[216631]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:33:03.140 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:33:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:33:03.141 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:33:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:33:03.141 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:33:03 localhost python3.9[216741]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 2 04:33:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64203 DF PROTO=TCP SPT=33654 DPT=9882 SEQ=2476047008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545D5D10000000001030307)
Dec 2 04:33:03 localhost python3.9[216851]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 2 04:33:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64204 DF PROTO=TCP SPT=33654 DPT=9882 SEQ=2476047008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545D9E20000000001030307)
Dec 2 04:33:04 localhost python3.9[216969]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:33:05 localhost python3.9[217057]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667984.2749496-1832-2724365468302/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:33:06 localhost systemd[1]: tmp-crun.leCWB1.mount: Deactivated successfully.
Dec 2 04:33:06 localhost podman[217168]: 2025-12-02 09:33:06.016253516 +0000 UTC m=+0.104344679 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:33:06 localhost podman[217168]: 2025-12-02 09:33:06.066353521 +0000 UTC m=+0.154444704 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:33:06 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:33:06 localhost python3.9[217167]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64205 DF PROTO=TCP SPT=33654 DPT=9882 SEQ=2476047008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545E1E20000000001030307)
Dec 2 04:33:06 localhost python3.9[217302]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 04:33:06 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 2 04:33:06 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 2 04:33:06 localhost systemd[1]: Stopping Load Kernel Modules...
Dec 2 04:33:06 localhost systemd[1]: Starting Load Kernel Modules...
Dec 2 04:33:06 localhost systemd-modules-load[217306]: Module 'msr' is built in
Dec 2 04:33:06 localhost systemd[1]: Finished Load Kernel Modules.
Dec 2 04:33:07 localhost python3.9[217416]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:33:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:33:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57601 DF PROTO=TCP SPT=38354 DPT=9100 SEQ=2652238194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545EF230000000001030307)
Dec 2 04:33:10 localhost systemd[1]: tmp-crun.1LYvht.mount: Deactivated successfully.
Dec 2 04:33:10 localhost podman[217419]: 2025-12-02 09:33:10.122681505 +0000 UTC m=+0.123239178 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:33:10 localhost podman[217419]: 2025-12-02 09:33:10.15323227 +0000 UTC m=+0.153789903 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 2 04:33:10 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:33:11 localhost systemd[1]: Reloading.
Dec 2 04:33:11 localhost systemd-rc-local-generator[217471]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:33:11 localhost systemd-sysv-generator[217475]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:11 localhost systemd[1]: Reloading.
Dec 2 04:33:11 localhost systemd-rc-local-generator[217509]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:33:11 localhost systemd-sysv-generator[217512]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 2 04:33:12 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 2 04:33:12 localhost lvm[217557]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 2 04:33:12 localhost lvm[217558]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 2 04:33:12 localhost lvm[217557]: VG ceph_vg1 finished
Dec 2 04:33:12 localhost lvm[217558]: VG ceph_vg0 finished
Dec 2 04:33:12 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 04:33:12 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 04:33:12 localhost systemd[1]: Reloading.
Dec 2 04:33:12 localhost systemd-sysv-generator[217612]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:33:12 localhost systemd-rc-local-generator[217609]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:33:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25424 DF PROTO=TCP SPT=57064 DPT=9100 SEQ=3621728756 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545F9230000000001030307)
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 04:33:13 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 04:33:13 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 04:33:13 localhost systemd[1]: man-db-cache-update.service: Consumed 1.252s CPU time.
Dec 2 04:33:13 localhost systemd[1]: run-r3d2f3b322466429da446a5651b0cf14c.service: Deactivated successfully.
Dec 2 04:33:15 localhost python3.9[218851]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:33:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43103 DF PROTO=TCP SPT=35618 DPT=9102 SEQ=2384408096 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546062E0000000001030307)
Dec 2 04:33:16 localhost python3.9[218965]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:17 localhost python3.9[219143]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 2 04:33:17 localhost systemd[1]: Reloading.
Dec 2 04:33:17 localhost systemd-rc-local-generator[219170]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:33:17 localhost systemd-sysv-generator[219173]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:33:18 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:18 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:18 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:18 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:33:18 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:18 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:18 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:18 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:18 localhost python3.9[219304]: ansible-ansible.builtin.service_facts Invoked
Dec 2 04:33:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64207 DF PROTO=TCP SPT=33654 DPT=9882 SEQ=2476047008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54611220000000001030307)
Dec 2 04:33:18 localhost network[219321]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 2 04:33:18 localhost network[219322]: 'network-scripts' will be removed from distribution in near future.
Dec 2 04:33:18 localhost network[219323]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 2 04:33:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:33:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60631 DF PROTO=TCP SPT=44204 DPT=9101 SEQ=328230410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5461D220000000001030307)
Dec 2 04:33:26 localhost python3.9[219558]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60949 DF PROTO=TCP SPT=56422 DPT=9101 SEQ=139676450 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54631A20000000001030307)
Dec 2 04:33:27 localhost sshd[219648]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:33:27 localhost python3.9[219671]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:28 localhost python3.9[219782]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:29 localhost python3.9[219893]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:30 localhost python3.9[220004]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 04:33:30 localhost podman[220115]: 2025-12-02 09:33:30.817013355 +0000 UTC m=+0.097731205 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:33:30 localhost podman[220115]: 2025-12-02 09:33:30.85338346 +0000 UTC m=+0.134101360 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 2 04:33:30 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 04:33:31 localhost python3.9[220116]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43107 DF PROTO=TCP SPT=35618 DPT=9102 SEQ=2384408096 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54643230000000001030307)
Dec 2 04:33:31 localhost python3.9[220246]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:32 localhost python3.9[220357]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16438 DF PROTO=TCP SPT=54014 DPT=9882 SEQ=551791104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5464B010000000001030307)
Dec 2 04:33:33 localhost python3.9[220468]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:34 localhost python3.9[220578]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16439 DF PROTO=TCP SPT=54014 DPT=9882 SEQ=551791104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5464F230000000001030307)
Dec 2 04:33:35 localhost python3.9[220688]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:36 localhost python3.9[220798]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:33:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16440 DF PROTO=TCP SPT=54014 DPT=9882 SEQ=551791104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54657220000000001030307)
Dec 2 04:33:36 localhost podman[220908]: 2025-12-02 09:33:36.824092683 +0000 UTC m=+0.094284591 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 2 04:33:36 localhost podman[220908]: 2025-12-02 09:33:36.859713294 +0000 UTC m=+0.129905242 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 2 04:33:36 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:33:36 localhost python3.9[220909]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:37 localhost python3.9[221042]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:38 localhost python3.9[221152]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:39 localhost python3.9[221262]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:39 localhost python3.9[221372]: ansible-ansible.builtin.file Invoked with 
path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4241 DF PROTO=TCP SPT=50620 DPT=9100 SEQ=3217129148 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54665230000000001030307) Dec 2 04:33:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:33:40 localhost systemd[1]: tmp-crun.2OzpLc.mount: Deactivated successfully. Dec 2 04:33:40 localhost podman[221483]: 2025-12-02 09:33:40.448177374 +0000 UTC m=+0.082514447 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 04:33:40 localhost podman[221483]: 2025-12-02 09:33:40.479403285 +0000 UTC m=+0.113740418 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent) Dec 2 04:33:40 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:33:40 localhost python3.9[221482]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:41 localhost python3.9[221611]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:41 localhost python3.9[221721]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent 
recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:42 localhost python3.9[221831]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:42 localhost python3.9[221941]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33207 DF PROTO=TCP SPT=60846 DPT=9105 SEQ=1117703903 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54670230000000001030307) Dec 2 04:33:43 localhost python3.9[222051]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None 
seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:44 localhost python3.9[222161]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:45 localhost python3.9[222271]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42178 DF PROTO=TCP SPT=39690 DPT=9105 SEQ=325339395 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5467B230000000001030307) Dec 2 04:33:46 localhost python3.9[222381]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 2 04:33:47 localhost python3.9[222491]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:33:47 localhost systemd[1]: Reloading. 
Dec 2 04:33:47 localhost systemd-rc-local-generator[222516]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:33:47 localhost systemd-sysv-generator[222521]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:33:47 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:47 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:47 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:47 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:33:48 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:48 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:48 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:48 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:48 localhost python3.9[222637]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16442 DF PROTO=TCP SPT=54014 DPT=9882 SEQ=551791104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54687220000000001030307) Dec 2 04:33:49 localhost python3.9[222748]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:50 localhost python3.9[222859]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:50 localhost python3.9[222970]: ansible-ansible.legacy.command Invoked with 
cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:51 localhost python3.9[223081]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:51 localhost python3.9[223192]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:52 localhost python3.9[223303]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:53 localhost python3.9[223414]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31187 DF PROTO=TCP SPT=58044 DPT=9101 SEQ=452282954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54697230000000001030307) Dec 2 04:33:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 
MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31188 DF PROTO=TCP SPT=58044 DPT=9101 SEQ=452282954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546A6E20000000001030307) Dec 2 04:33:57 localhost python3.9[223525]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:33:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33210 DF PROTO=TCP SPT=60846 DPT=9105 SEQ=1117703903 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546A9230000000001030307) Dec 2 04:33:57 localhost python3.9[223635]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:33:59 localhost python3.9[223745]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None 
attributes=None Dec 2 04:33:59 localhost python3.9[223855]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:00 localhost python3.9[223965]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:34:01 localhost systemd[1]: tmp-crun.leTjHV.mount: Deactivated successfully. 
Dec 2 04:34:01 localhost podman[224076]: 2025-12-02 09:34:01.011496812 +0000 UTC m=+0.075155848 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:34:01 localhost podman[224076]: 2025-12-02 09:34:01.023109723 +0000 UTC m=+0.086768759 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0) Dec 2 04:34:01 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:34:01 localhost python3.9[224075]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26139 DF PROTO=TCP SPT=33942 DPT=9102 SEQ=915411990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546B7230000000001030307) Dec 2 04:34:01 localhost python3.9[224204]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:02 localhost python3.9[224314]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:02 localhost python3.9[224424]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:34:03.141 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:34:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:34:03.142 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:34:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:34:03.142 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:34:03 localhost python3.9[224534]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=273 DF PROTO=TCP SPT=32908 DPT=9882 SEQ=521459412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546C0310000000001030307) Dec 2 04:34:06 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=275 DF PROTO=TCP SPT=32908 DPT=9882 SEQ=521459412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546CC220000000001030307) Dec 2 04:34:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:34:07 localhost systemd[1]: tmp-crun.YeanwO.mount: Deactivated successfully. Dec 2 04:34:07 localhost podman[224552]: 2025-12-02 09:34:07.0718719 +0000 UTC m=+0.075376255 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2) Dec 2 04:34:07 
localhost podman[224552]: 2025-12-02 09:34:07.138014207 +0000 UTC m=+0.141518582 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 04:34:07 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:34:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19577 DF PROTO=TCP SPT=59234 DPT=9100 SEQ=3569422483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546D9220000000001030307)
Dec 2 04:34:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:34:10 localhost podman[224671]: 2025-12-02 09:34:10.708202949 +0000 UTC m=+0.086817651 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 2 04:34:10 localhost podman[224671]: 2025-12-02 09:34:10.740831644 +0000 UTC m=+0.119446316 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 2 04:34:10 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:34:10 localhost python3.9[224670]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 2 04:34:11 localhost python3.9[224801]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 2 04:34:12 localhost python3.9[224917]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541914.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 2 04:34:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33344 DF PROTO=TCP SPT=41786 DPT=9105 SEQ=1859910421 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546E5630000000001030307)
Dec 2 04:34:13 localhost sshd[224943]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:34:13 localhost systemd-logind[760]: New session 54 of user zuul.
Dec 2 04:34:13 localhost systemd[1]: Started Session 54 of User zuul.
Dec 2 04:34:13 localhost systemd[1]: session-54.scope: Deactivated successfully.
Dec 2 04:34:13 localhost systemd-logind[760]: Session 54 logged out. Waiting for processes to exit.
Dec 2 04:34:13 localhost systemd-logind[760]: Removed session 54.
Dec 2 04:34:14 localhost python3.9[225054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:34:15 localhost python3.9[225140]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668054.120079-3391-212330499939368/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:34:15 localhost python3.9[225248]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:34:15 localhost python3.9[225303]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:34:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35276 DF PROTO=TCP SPT=45496 DPT=9102 SEQ=1369579200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546F08E0000000001030307)
Dec 2 04:34:16 localhost python3.9[225411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:34:17 localhost python3.9[225497]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668056.1458085-3391-212501758833158/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:34:17 localhost python3.9[225605]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:34:18 localhost python3.9[225691]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668057.1997192-3391-227233417575978/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=2618deabb92e3bb6763a4ba7147e78332a2d3a7c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:34:18 localhost python3.9[225834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:34:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35278 DF PROTO=TCP SPT=45496 DPT=9102 SEQ=1369579200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546FCA20000000001030307)
Dec 2 04:34:19 localhost python3.9[225940]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668058.2300456-3391-89488593237671/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:34:19 localhost python3.9[226061]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:34:20 localhost python3.9[226165]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668059.2794132-3391-134617593127501/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:34:21 localhost python3.9[226275]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:34:21 localhost python3.9[226385]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:34:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31190 DF PROTO=TCP SPT=58044 DPT=9101 SEQ=452282954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54707230000000001030307)
Dec 2 04:34:22 localhost python3.9[226495]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:34:23 localhost python3.9[226607]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:34:23 localhost python3.9[226715]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:34:24 localhost python3.9[226825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:34:24 localhost python3.9[226911]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668064.0266976-3767-250150558813287/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:34:25 localhost python3.9[227019]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:34:26 localhost python3.9[227105]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668065.2211702-3811-253836097410432/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:34:27 localhost python3.9[227215]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 2 04:34:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22783 DF PROTO=TCP SPT=35984 DPT=9101 SEQ=1731321425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5471C220000000001030307)
Dec 2 04:34:27 localhost python3.9[227325]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 2 04:34:28 localhost python3[227435]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 2 04:34:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35280 DF PROTO=TCP SPT=45496 DPT=9102 SEQ=1369579200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5472D230000000001030307)
Dec 2 04:34:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 04:34:32 localhost podman[227462]: 2025-12-02 09:34:32.087481095 +0000 UTC m=+0.091888049 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:34:32 localhost podman[227462]: 2025-12-02 09:34:32.094182633 +0000 UTC m=+0.098589617 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 2 04:34:32 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 04:34:32 localhost sshd[227492]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:34:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46519 DF PROTO=TCP SPT=34970 DPT=9882 SEQ=2390784494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54735610000000001030307)
Dec 2 04:34:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46520 DF PROTO=TCP SPT=34970 DPT=9882 SEQ=2390784494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54739620000000001030307)
Dec 2 04:34:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46521 DF PROTO=TCP SPT=34970 DPT=9882 SEQ=2390784494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54741630000000001030307)
Dec 2 04:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:34:40 localhost podman[227508]: 2025-12-02 09:34:40.100886642 +0000 UTC m=+2.101207369 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 2 04:34:40 localhost podman[227449]: 2025-12-02 09:34:28.898360153 +0000 UTC m=+0.055670532 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 2 04:34:40 localhost podman[227508]: 2025-12-02 09:34:40.190776457 +0000 UTC m=+2.191097174 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:34:40 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:34:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49518 DF PROTO=TCP SPT=57808 DPT=9100 SEQ=3813979202 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5474F230000000001030307)
Dec 2 04:34:40 localhost podman[227554]:
Dec 2 04:34:40 localhost podman[227554]: 2025-12-02 09:34:40.403419111 +0000 UTC m=+0.088480173 container create 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:34:40 localhost podman[227554]: 2025-12-02 09:34:40.362944121 +0000 UTC m=+0.048005203 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 2 04:34:40 localhost python3[227435]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 2 04:34:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:34:41 localhost systemd[1]: tmp-crun.SQCzJO.mount: Deactivated successfully.
Dec 2 04:34:41 localhost podman[227610]: 2025-12-02 09:34:41.133937539 +0000 UTC m=+0.134248496 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:34:41 localhost podman[227610]: 2025-12-02 09:34:41.164960914 +0000 UTC m=+0.165271881 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:34:41 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:34:43 localhost python3.9[227720]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:34:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10786 DF PROTO=TCP SPT=42652 DPT=9105 SEQ=18637374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5475A620000000001030307)
Dec 2 04:34:44 localhost python3.9[227832]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 2 04:34:44 localhost python3.9[227942]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 2 04:34:45 localhost python3[228052]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 2 04:34:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51877 DF PROTO=TCP SPT=44782 DPT=9102 SEQ=1218652330 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54765BE0000000001030307)
Dec 2 04:34:46 localhost python3[228052]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",#012 "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:31:10.62653219Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211779450,#012 "VirtualSize": 1211779450,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",#012 "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012
Dec 2 04:34:46 localhost podman[228102]: 2025-12-02 09:34:46.355682187 +0000 UTC m=+0.088854258 container remove 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, version=17.1.12, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., container_name=nova_compute, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Dec 2 04:34:46 localhost python3[228052]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute
Dec 2 04:34:46 localhost podman[228115]:
Dec 2 04:34:46 localhost podman[228115]: 2025-12-02 09:34:46.448192261 +0000 UTC m=+0.079155580 container create e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 2 04:34:46 localhost podman[228115]: 2025-12-02 09:34:46.40038795 +0000 UTC m=+0.031351299 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 2 04:34:46 localhost python3[228052]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec 2 04:34:47 localhost python3.9[228261]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:34:48 localhost python3.9[228373]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:34:48 localhost python3.9[228482]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668088.1030197-4086-222717080797742/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:34:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46523 DF PROTO=TCP SPT=34970 DPT=9882 SEQ=2390784494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54771230000000001030307)
Dec 2 04:34:49 localhost python3.9[228537]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 2 04:34:49 localhost systemd[1]: Reloading.
Dec 2 04:34:49 localhost systemd-rc-local-generator[228559]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:34:49 localhost systemd-sysv-generator[228565]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:50 localhost python3.9[228628]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:34:50 localhost systemd[1]: Reloading.
Dec 2 04:34:50 localhost systemd-rc-local-generator[228655]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:34:50 localhost systemd-sysv-generator[228661]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:34:50 localhost systemd[1]: Starting nova_compute container...
Dec 2 04:34:50 localhost systemd[1]: Started libcrun container.
Dec 2 04:34:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 2 04:34:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 2 04:34:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 2 04:34:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 2 04:34:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 2 04:34:50 localhost podman[228669]: 2025-12-02 09:34:50.706340083 +0000 UTC m=+0.105003202 container init e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 2 04:34:50 localhost podman[228669]: 2025-12-02 09:34:50.71877622 +0000 UTC m=+0.117439349 container start e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 2 04:34:50 localhost podman[228669]: nova_compute
Dec 2 04:34:50 localhost systemd[1]: Started nova_compute container.
Dec 2 04:34:50 localhost nova_compute[228682]: + sudo -E kolla_set_configs
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Validating config file
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Copying service configuration files
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Deleting /etc/ceph
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Creating directory /etc/ceph
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /etc/ceph
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Writing out command to execute
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 2 04:34:50 localhost nova_compute[228682]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 2 04:34:50 localhost nova_compute[228682]: ++ cat /run_command
Dec 2 04:34:50 localhost nova_compute[228682]: + CMD=nova-compute
Dec 2 04:34:50 localhost nova_compute[228682]: + ARGS=
Dec 2 04:34:50 localhost nova_compute[228682]: + sudo kolla_copy_cacerts
Dec 2 04:34:50 localhost nova_compute[228682]: + [[ ! -n '' ]]
Dec 2 04:34:50 localhost nova_compute[228682]: + . kolla_extend_start
Dec 2 04:34:50 localhost nova_compute[228682]: Running command: 'nova-compute'
Dec 2 04:34:50 localhost nova_compute[228682]: + echo 'Running command: '\''nova-compute'\'''
Dec 2 04:34:50 localhost nova_compute[228682]: + umask 0022
Dec 2 04:34:50 localhost nova_compute[228682]: + exec nova-compute
Dec 2 04:34:51 localhost python3.9[228802]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:34:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22785 DF PROTO=TCP SPT=35984 DPT=9101 SEQ=1731321425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5477D230000000001030307)
Dec 2 04:34:52 localhost nova_compute[228682]: 2025-12-02 09:34:52.567 228686 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 2 04:34:52 localhost nova_compute[228682]: 2025-12-02 09:34:52.568 228686 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 2 04:34:52 localhost nova_compute[228682]: 2025-12-02 09:34:52.568 228686 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 2 04:34:52 localhost nova_compute[228682]: 2025-12-02 09:34:52.568 228686 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec 2 04:34:52 localhost nova_compute[228682]: 2025-12-02 09:34:52.690 228686 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 04:34:52 localhost nova_compute[228682]: 2025-12-02 09:34:52.700 228686 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 04:34:52 localhost nova_compute[228682]: 2025-12-02 09:34:52.700 228686 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.091 228686 INFO nova.virt.driver [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec 2 04:34:53 localhost python3.9[228914]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.210 228686 INFO nova.compute.provider_config [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.218 228686 WARNING nova.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.218 228686 DEBUG oslo_concurrency.lockutils [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.218 228686 DEBUG oslo_concurrency.lockutils [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.218 228686 DEBUG oslo_concurrency.lockutils [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.219 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.219 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ******************************************************************************** log_opt_values
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.219 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.219 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.219 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.219 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.220 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.220 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.220 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] backdoor_port = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.220 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.220 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.220 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.220 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.221 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.221 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.221 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.221 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.221 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.221 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.221 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] console_host = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.222 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.222 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.222 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] daemon = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.222 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.222 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.222 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.222 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.223 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost 
nova_compute[228682]: 2025-12-02 09:34:53.223 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.223 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.223 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.223 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.223 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.224 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.224 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.224 228686 DEBUG oslo_service.service 
[None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.224 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.224 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.224 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] host = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.224 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.225 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.225 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.225 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - 
- - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.225 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.225 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.225 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.226 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.226 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.226 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.226 228686 DEBUG oslo_service.service [None 
req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.226 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.226 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.226 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - 
- -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.228 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.228 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.228 228686 DEBUG oslo_service.service [None 
req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.228 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.228 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.228 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.228 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.229 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.229 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] max_concurrent_live_migrations = 
1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.229 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.229 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.229 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.229 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.229 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.230 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.230 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 
04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.230 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.230 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.230 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.230 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] my_block_storage_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.230 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] my_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 
2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.232 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.232 228686 DEBUG oslo_service.service [None 
req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.232 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.232 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.232 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.232 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.232 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.233 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.233 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.233 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.233 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.233 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.233 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.233 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.235 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.235 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.235 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.235 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.235 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.235 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.235 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.237 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.237 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.237 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.237 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.237 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.237 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.237 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.238 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.238 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.238 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.238 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.238 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.238 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.238 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.239 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.239 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.239 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.239 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.239 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.239 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.239 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.240 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.240 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.240 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.240 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.240 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.240 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.241 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.241 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.241 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.241 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.241 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.241 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.241 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.242 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.242 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.242 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.242 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.242 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.242 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.242 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.243 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.243 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.243 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.243 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.243 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.243 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.243 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.244 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.244 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.244 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.244 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.244 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.244 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.246 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.246 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.246 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.246 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.246 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.246 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.246 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.247 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.247 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.247 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.247 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.247 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.247 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.248 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.248 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.248 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.248 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.248 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.249 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.249 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.249 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.249 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.249 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.251 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.251 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.251 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.251 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.251 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.251 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.251 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.252 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.252 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.252 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.252 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.252 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.252 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.252 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.253 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.253 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.253 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.253 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.253 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.253 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.253 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -]
consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.255 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.255 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.255 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.255 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.255 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.255 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.255 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost 
nova_compute[228682]: 2025-12-02 09:34:53.256 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.256 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.256 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.256 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.256 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.256 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.256 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.257 228686 DEBUG 
oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.257 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.257 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.257 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.257 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.257 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.257 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.258 228686 DEBUG oslo_service.service [None 
req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.258 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.258 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.258 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.258 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.258 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.259 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.259 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] 
database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.259 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.259 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.259 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.259 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.259 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.connection_parameters = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.261 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.max_overflow = 50 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.261 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.261 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.261 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.261 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.261 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.261 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.retry_interval = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.api_servers = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.263 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.263 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.263 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.263 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.263 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.263 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.263 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 
localhost nova_compute[228682]: 2025-12-02 09:34:53.264 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.264 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.264 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.264 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.264 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.264 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.264 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG 
oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.service_type = image 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.266 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.266 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.266 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.266 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.266 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.266 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.266 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.version = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.instances_path_share = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.268 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.268 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.268 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.268 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.268 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.268 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.268 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.269 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.269 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.269 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.269 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.269 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.269 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] 
ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.271 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.271 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.271 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.271 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.271 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.271 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.connect_retries = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.271 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost 
nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG 
oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.274 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.274 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.274 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.274 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.274 228686 DEBUG oslo_service.service [None 
req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.274 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.274 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] 
barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.276 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.276 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.276 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.276 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.verify_ssl = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.276 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.276 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.276 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.279 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.279 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.279 228686 DEBUG oslo_service.service [None 
req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.279 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.279 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.279 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.279 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.collect_timing = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.min_version = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.282 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.282 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.282 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.282 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.282 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.282 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.283 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.283 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.283 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.283 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.283 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.283 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.283 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.284 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.284 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.284 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.284 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.284 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.284 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.284 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.285 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.285 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.285 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.285 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.285 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.285 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.285 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.287 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.287 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.287 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.287 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.287 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.287 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.287 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.288 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.288 228686 WARNING oslo_config.cfg [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 2 04:34:53 localhost nova_compute[228682]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 2 04:34:53 localhost nova_compute[228682]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 2 04:34:53 localhost nova_compute[228682]: and ``live_migration_inbound_addr`` respectively.
Dec 2 04:34:53 localhost nova_compute[228682]: ). Its value may be silently ignored in the future.
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.288 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.288 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.288 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.289 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.289 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.289 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.289 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.289 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.289 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.290 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.290 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.290 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.290 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.290 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.290 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.291 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.291 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.291 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.291 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rbd_secret_uuid = c7c8e171-a193-56fb-95fa-8879fcfa7074 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.291 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.291 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.292 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.292 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.292 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.292 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.292 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.293 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.293 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.293 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.293 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.293 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.295 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.295 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.295 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.295 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.295 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.295 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.295 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.296 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.296 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.296 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.296 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.296 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.296 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.296 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.297 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.297 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.297 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.297 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.297 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.297 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.298 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.298 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.298 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.298 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.298 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.298 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.298 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.300 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.300 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.300 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.300 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.300 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.300 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.300 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.301 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.301 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.301 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.301 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.301 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.301 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.301 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.302 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.302 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.302 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.302 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.302 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.302 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.302 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec
2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.304 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.304 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.304 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.304 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.304 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.304 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.304 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 
09:34:53.305 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.305 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.305 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.305 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.305 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.305 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.305 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.305 228686 DEBUG 
oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.306 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.306 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.306 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.306 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.306 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.306 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.307 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - 
- - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.307 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.307 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.307 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.307 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.308 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.308 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.308 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.instances = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.308 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.308 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.309 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.309 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.309 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.309 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.309 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 
2025-12-02 09:34:53.310 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.310 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.310 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.310 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.310 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.310 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.310 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 
localhost nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] 
filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.312 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.312 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.312 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.312 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.312 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] 
filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG 
oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.315 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.315 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.315 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.315 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 
09:34:53.315 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.315 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.315 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.316 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.316 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.316 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.316 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 
09:34:53.316 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.316 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.316 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None 
req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.318 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.318 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.318 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.318 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - 
- - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.318 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.318 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.318 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.319 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.319 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.319 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.319 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] upgrade_levels.baseapi = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.319 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.319 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.319 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.320 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.320 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.320 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.320 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.320 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.320 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.api_retry_count = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 
localhost nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.323 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.323 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.323 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.323 228686 DEBUG 
oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.323 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.323 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.323 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.324 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.324 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.324 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.324 228686 DEBUG oslo_service.service [None 
req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.324 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.324 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.324 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.325 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.325 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.325 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.325 228686 DEBUG oslo_service.service [None 
req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.325 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.325 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None 
req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.327 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.327 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.327 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 
09:34:53.327 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.327 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.327 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.327 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.328 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.328 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.328 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.328 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.328 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.328 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.328 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.329 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.329 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.329 228686 DEBUG oslo_service.service [None 
req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.329 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.329 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.329 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] 
wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.331 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.331 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.331 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - 
- - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.331 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.331 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.331 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.331 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.332 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.332 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.332 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c 
- - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.332 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.332 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.332 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.332 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.333 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.333 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.333 228686 DEBUG oslo_service.service [None 
req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.333 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.333 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.333 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.333 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.334 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.334 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.334 228686 
DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.334 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.334 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.334 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.334 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.335 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.335 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.335 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.335 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.335 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.335 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.335 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - 
- - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.337 228686 DEBUG 
oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.337 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.337 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.337 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.337 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.337 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.337 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.338 228686 
DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.338 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.338 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.338 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.338 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.338 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.338 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.339 
228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.339 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.339 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.339 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.339 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.339 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.339 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.339 228686 DEBUG oslo_service.service [None 
req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.340 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.340 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.340 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.340 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.340 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.340 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.340 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] 
oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.split_loggers = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.342 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.342 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.342 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.342 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.342 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.342 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.user_domain_name = Default log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.342 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.343 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.343 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.343 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.343 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.343 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.343 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_reports.log_dir = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None 
req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.345 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.345 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.345 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.345 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.345 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.345 228686 DEBUG 
oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.345 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.346 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.346 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.346 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.346 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.346 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 
2025-12-02 09:34:53.346 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.346 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.347 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.347 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.347 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.347 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.347 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.347 228686 
DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.347 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.347 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.348 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.348 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.348 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.348 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.348 228686 DEBUG 
oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.348 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.348 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.349 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.349 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.349 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.349 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.349 228686 DEBUG oslo_service.service [None 
req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.350 228686 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.383 228686 INFO nova.virt.node [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Determined node identity 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from /var/lib/nova/compute_id#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.384 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.385 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.385 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.385 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Dec 2 04:34:53 localhost systemd[1]: Started libvirt QEMU daemon. 
Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.448 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.451 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.451 228686 INFO nova.virt.libvirt.driver [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Connection event '1' reason 'None'#033[00m Dec 2 04:34:53 localhost nova_compute[228682]: 2025-12-02 09:34:53.469 228686 DEBUG nova.virt.libvirt.volume.mount [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Dec 2 04:34:53 localhost python3.9[229075]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:34:54 localhost nova_compute[228682]: 2025-12-02 09:34:54.350 228686 INFO nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Libvirt host capabilities [multi-line capabilities XML follows; the element markup was lost in log extraction — recoverable values: host UUID 64aa5208-7bf7-490c-857b-3c1a3cae8bb3; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp, rdma; memory 16116612 KiB (4029153 pages); security models selinux (base context system_u:system_r:svirt_t:s0, DOI 0, alternate system_u:system_r:svirt_tcg_t:s0) and dac (DOI 0, +107:+107); hvm guest support for 32-bit and 64-bit via /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (alias pc) and pc-q35-rhel7.6.0 through pc-q35-rhel9.8.0 (alias q35)] Dec 2
04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: #033[00m Dec 2 04:34:54 localhost nova_compute[228682]: 2025-12-02 09:34:54.360 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Dec 2 04:34:54 localhost nova_compute[228682]: 2025-12-02 09:34:54.378 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: /usr/libexec/qemu-kvm Dec 2 04:34:54 localhost nova_compute[228682]: kvm Dec 2 04:34:54 localhost nova_compute[228682]: pc-i440fx-rhel7.6.0 Dec 2 04:34:54 localhost nova_compute[228682]: i686 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: /usr/share/OVMF/OVMF_CODE.secboot.fd Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: rom Dec 2 04:34:54 localhost nova_compute[228682]: pflash Dec 2 04:34:54 
localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: yes Dec 2 04:34:54 localhost nova_compute[228682]: no Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: no Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: on Dec 2 04:34:54 localhost nova_compute[228682]: off Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: on Dec 2 04:34:54 localhost nova_compute[228682]: off Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome Dec 2 04:34:54 localhost nova_compute[228682]: AMD Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: 
Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: 486 Dec 2 04:34:54 localhost nova_compute[228682]: 486-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-IBRS Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-noTSX Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-noTSX-IBRS Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-v4 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Cascadelake-Server-noTSX Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-v4 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-v5 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Conroe Dec 2 04:34:54 localhost nova_compute[228682]: Conroe-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Cooperlake Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cooperlake-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 
04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cooperlake-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Denverton Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Denverton-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Denverton-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Denverton-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dhyana Dec 2 
04:34:54 localhost nova_compute[228682]: Dhyana-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dhyana-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Genoa Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Genoa-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-IBPB Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Milan Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Milan-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 
localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Milan-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v4 Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-v1 Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-v2 Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: EPYC-v4 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: GraniteRapids Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: GraniteRapids-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
Dec 2 04:34:54 localhost nova_compute[228682]: GraniteRapids-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-noTSX
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-noTSX-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v3
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v4
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-noTSX
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v3
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v4
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v5
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v6
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v7
Dec 2 04:34:54 localhost nova_compute[228682]: IvyBridge
Dec 2 04:34:54 localhost nova_compute[228682]: IvyBridge-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: IvyBridge-v1
Dec 2 04:34:54 localhost nova_compute[228682]: IvyBridge-v2
Dec 2 04:34:54 localhost nova_compute[228682]: KnightsMill
Dec 2 04:34:54 localhost nova_compute[228682]: KnightsMill-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Nehalem
Dec 2 04:34:54 localhost nova_compute[228682]: Nehalem-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Nehalem-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Nehalem-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G1
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G1-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G2
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G2-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G3
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G3-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G4
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G4-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G5
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G5-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Penryn
Dec 2 04:34:54 localhost nova_compute[228682]: Penryn-v1
Dec 2 04:34:54 localhost nova_compute[228682]: SandyBridge
Dec 2 04:34:54 localhost nova_compute[228682]: SandyBridge-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: SandyBridge-v1
Dec 2 04:34:54 localhost nova_compute[228682]: SandyBridge-v2
Dec 2 04:34:54 localhost nova_compute[228682]: SapphireRapids
Dec 2 04:34:54 localhost nova_compute[228682]: SapphireRapids-v1
Dec 2 04:34:54 localhost nova_compute[228682]: SapphireRapids-v2
Dec 2 04:34:54 localhost nova_compute[228682]: SapphireRapids-v3
Dec 2 04:34:54 localhost nova_compute[228682]: SierraForest
Dec 2 04:34:54 localhost nova_compute[228682]: SierraForest-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-noTSX-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-v3
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-v4
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-noTSX-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-v3
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-v4
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-v5
Dec 2 04:34:54 localhost nova_compute[228682]: Snowridge
Dec 2 04:34:54 localhost nova_compute[228682]: Snowridge-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Snowridge-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Snowridge-v3
Dec 2 04:34:54 localhost nova_compute[228682]: Snowridge-v4
Dec 2 04:34:54 localhost nova_compute[228682]: Westmere
Dec 2 04:34:54 localhost nova_compute[228682]: Westmere-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Westmere-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Westmere-v2
Dec 2 04:34:54 localhost nova_compute[228682]: athlon
Dec 2 04:34:54 localhost nova_compute[228682]: athlon-v1
localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: core2duo Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: core2duo-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: coreduo Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: coreduo-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: kvm32 Dec 2 04:34:54 localhost nova_compute[228682]: kvm32-v1 Dec 2 04:34:54 localhost nova_compute[228682]: kvm64 Dec 2 04:34:54 localhost nova_compute[228682]: kvm64-v1 Dec 2 04:34:54 localhost nova_compute[228682]: n270 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: n270-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: pentium Dec 2 04:34:54 localhost nova_compute[228682]: pentium-v1 Dec 2 04:34:54 localhost nova_compute[228682]: pentium2 Dec 2 04:34:54 localhost nova_compute[228682]: pentium2-v1 Dec 2 04:34:54 localhost nova_compute[228682]: pentium3 Dec 2 04:34:54 localhost nova_compute[228682]: pentium3-v1 Dec 2 04:34:54 localhost nova_compute[228682]: phenom Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: phenom-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: qemu32 Dec 2 04:34:54 localhost nova_compute[228682]: qemu32-v1 Dec 2 04:34:54 localhost nova_compute[228682]: qemu64 Dec 2 04:34:54 localhost nova_compute[228682]: qemu64-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: file Dec 2 04:34:54 localhost nova_compute[228682]: anonymous Dec 2 04:34:54 localhost nova_compute[228682]: memfd Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: disk Dec 2 04:34:54 localhost nova_compute[228682]: cdrom Dec 2 04:34:54 localhost nova_compute[228682]: floppy Dec 2 04:34:54 localhost nova_compute[228682]: lun Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: ide Dec 2 04:34:54 localhost nova_compute[228682]: fdc Dec 2 04:34:54 localhost nova_compute[228682]: scsi Dec 2 04:34:54 localhost nova_compute[228682]: virtio Dec 2 04:34:54 localhost nova_compute[228682]: usb Dec 2 04:34:54 localhost nova_compute[228682]: sata Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: virtio Dec 2 04:34:54 localhost nova_compute[228682]: virtio-transitional 
Dec 2 04:34:54 localhost nova_compute[228682]: virtio-non-transitional Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: vnc Dec 2 04:34:54 localhost nova_compute[228682]: egl-headless Dec 2 04:34:54 localhost nova_compute[228682]: dbus Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: subsystem Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: default Dec 2 04:34:54 localhost nova_compute[228682]: mandatory Dec 2 04:34:54 localhost nova_compute[228682]: requisite Dec 2 04:34:54 localhost nova_compute[228682]: optional Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: usb Dec 2 04:34:54 localhost nova_compute[228682]: pci Dec 2 04:34:54 localhost nova_compute[228682]: scsi Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: virtio Dec 2 04:34:54 localhost nova_compute[228682]: virtio-transitional Dec 2 04:34:54 localhost nova_compute[228682]: virtio-non-transitional Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: random Dec 2 04:34:54 localhost nova_compute[228682]: egd Dec 2 04:34:54 localhost 
nova_compute[228682]: builtin Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: path Dec 2 04:34:54 localhost nova_compute[228682]: handle Dec 2 04:34:54 localhost nova_compute[228682]: virtiofs Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: tpm-tis Dec 2 04:34:54 localhost nova_compute[228682]: tpm-crb Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: emulator Dec 2 04:34:54 localhost nova_compute[228682]: external Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: 2.0 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: usb Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: pty Dec 2 04:34:54 localhost nova_compute[228682]: unix Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: qemu Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: builtin Dec 
2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: default Dec 2 04:34:54 localhost nova_compute[228682]: passt Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: isa Dec 2 04:34:54 localhost nova_compute[228682]: hyperv Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: null Dec 2 04:34:54 localhost nova_compute[228682]: vc Dec 2 04:34:54 localhost nova_compute[228682]: pty Dec 2 04:34:54 localhost nova_compute[228682]: dev Dec 2 04:34:54 localhost nova_compute[228682]: file Dec 2 04:34:54 localhost nova_compute[228682]: pipe Dec 2 04:34:54 localhost nova_compute[228682]: stdio Dec 2 04:34:54 localhost nova_compute[228682]: udp Dec 2 04:34:54 localhost nova_compute[228682]: tcp Dec 2 04:34:54 localhost nova_compute[228682]: unix Dec 2 04:34:54 localhost nova_compute[228682]: qemu-vdagent Dec 2 04:34:54 localhost nova_compute[228682]: dbus Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: relaxed Dec 2 04:34:54 localhost nova_compute[228682]: vapic Dec 2 04:34:54 localhost nova_compute[228682]: spinlocks Dec 2 04:34:54 localhost nova_compute[228682]: vpindex Dec 2 04:34:54 localhost nova_compute[228682]: runtime Dec 2 04:34:54 localhost nova_compute[228682]: synic Dec 2 04:34:54 localhost nova_compute[228682]: stimer Dec 2 04:34:54 localhost nova_compute[228682]: reset Dec 2 04:34:54 localhost nova_compute[228682]: vendor_id Dec 2 04:34:54 localhost nova_compute[228682]: frequencies Dec 2 04:34:54 localhost nova_compute[228682]: reenlightenment Dec 2 04:34:54 localhost nova_compute[228682]: tlbflush Dec 2 04:34:54 localhost nova_compute[228682]: ipi Dec 2 04:34:54 localhost nova_compute[228682]: avic Dec 2 04:34:54 localhost nova_compute[228682]: emsr_bitmap Dec 2 04:34:54 localhost nova_compute[228682]: xmm_input Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: 4095 Dec 2 04:34:54 localhost nova_compute[228682]: on Dec 2 04:34:54 localhost nova_compute[228682]: off Dec 2 04:34:54 localhost nova_compute[228682]: off Dec 2 04:34:54 localhost nova_compute[228682]: Linux KVM Hv Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: tdx Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 2 04:34:54 localhost nova_compute[228682]: 2025-12-02 
09:34:54.387 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: /usr/libexec/qemu-kvm Dec 2 04:34:54 localhost nova_compute[228682]: kvm Dec 2 04:34:54 localhost nova_compute[228682]: pc-q35-rhel9.8.0 Dec 2 04:34:54 localhost nova_compute[228682]: i686 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: /usr/share/OVMF/OVMF_CODE.secboot.fd Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: rom Dec 2 04:34:54 localhost nova_compute[228682]: pflash Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: yes Dec 2 04:34:54 localhost nova_compute[228682]: no Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: no Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: on Dec 2 04:34:54 localhost nova_compute[228682]: off Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: on Dec 2 04:34:54 localhost nova_compute[228682]: off Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 
04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome Dec 2 04:34:54 localhost nova_compute[228682]: AMD Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: 486 Dec 2 04:34:54 localhost nova_compute[228682]: 486-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-IBRS Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-noTSX Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-noTSX-IBRS Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-v4 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 
04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-noTSX Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 
04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-v4 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 
04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-v5 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Conroe Dec 2 04:34:54 localhost nova_compute[228682]: Conroe-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Cooperlake Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cooperlake-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cooperlake-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Denverton Dec 2 
Dec 2 04:34:54 localhost nova_compute[228682]: Denverton-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Denverton-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Denverton-v3
Dec 2 04:34:54 localhost nova_compute[228682]: Dhyana
Dec 2 04:34:54 localhost nova_compute[228682]: Dhyana-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Dhyana-v2
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Genoa
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Genoa-v1
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-IBPB
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Milan
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Milan-v1
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Milan-v2
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v1
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v2
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v3
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v4
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-v1
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-v2
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-v3
Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-v4
Dec 2 04:34:54 localhost nova_compute[228682]: GraniteRapids
Dec 2 04:34:54 localhost nova_compute[228682]: GraniteRapids-v1
Dec 2 04:34:54 localhost nova_compute[228682]: GraniteRapids-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-noTSX
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-noTSX-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v3
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v4
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-noTSX
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v3
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v4
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v5
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v6
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v7
Dec 2 04:34:54 localhost nova_compute[228682]: IvyBridge
Dec 2 04:34:54 localhost nova_compute[228682]: IvyBridge-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: IvyBridge-v1
Dec 2 04:34:54 localhost nova_compute[228682]: IvyBridge-v2
Dec 2 04:34:54 localhost nova_compute[228682]: KnightsMill
Dec 2 04:34:54 localhost nova_compute[228682]: KnightsMill-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Nehalem
Dec 2 04:34:54 localhost nova_compute[228682]: Nehalem-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Nehalem-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Nehalem-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G1
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G1-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G2
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G2-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G3
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G3-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G4
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G4-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G5
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G5-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Penryn
Dec 2 04:34:54 localhost nova_compute[228682]: Penryn-v1
Dec 2 04:34:54 localhost nova_compute[228682]: SandyBridge
Dec 2 04:34:54 localhost nova_compute[228682]: SandyBridge-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: SandyBridge-v1
Dec 2 04:34:54 localhost nova_compute[228682]: SandyBridge-v2
Dec 2 04:34:54 localhost nova_compute[228682]: SapphireRapids
Dec 2 04:34:54 localhost nova_compute[228682]: SapphireRapids-v1
Dec 2 04:34:54 localhost nova_compute[228682]: SapphireRapids-v2
Dec 2 04:34:54 localhost nova_compute[228682]: SapphireRapids-v3
Dec 2 04:34:54 localhost nova_compute[228682]: SierraForest
Dec 2 04:34:54 localhost nova_compute[228682]: SierraForest-v1
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-IBRS Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-noTSX-IBRS Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: 
Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-v4 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-IBRS Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: 
Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-noTSX-IBRS Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 
04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-v4 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-v5 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Snowridge Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 
localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Snowridge-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Snowridge-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Snowridge-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Snowridge-v4 Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Westmere Dec 2 04:34:54 localhost nova_compute[228682]: Westmere-IBRS Dec 2 04:34:54 localhost nova_compute[228682]: Westmere-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Westmere-v2 Dec 2 04:34:54 localhost nova_compute[228682]: athlon Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: athlon-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: core2duo Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: core2duo-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: coreduo Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: coreduo-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: kvm32 Dec 2 04:34:54 localhost nova_compute[228682]: kvm32-v1 Dec 2 04:34:54 localhost 
nova_compute[228682]: kvm64 Dec 2 04:34:54 localhost nova_compute[228682]: kvm64-v1 Dec 2 04:34:54 localhost nova_compute[228682]: n270 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: n270-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: pentium Dec 2 04:34:54 localhost nova_compute[228682]: pentium-v1 Dec 2 04:34:54 localhost nova_compute[228682]: pentium2 Dec 2 04:34:54 localhost nova_compute[228682]: pentium2-v1 Dec 2 04:34:54 localhost nova_compute[228682]: pentium3 Dec 2 04:34:54 localhost nova_compute[228682]: pentium3-v1 Dec 2 04:34:54 localhost nova_compute[228682]: phenom Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: phenom-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: qemu32 Dec 2 04:34:54 localhost nova_compute[228682]: qemu32-v1 Dec 2 04:34:54 localhost nova_compute[228682]: qemu64 Dec 2 04:34:54 localhost nova_compute[228682]: qemu64-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: file Dec 2 04:34:54 localhost nova_compute[228682]: anonymous Dec 2 04:34:54 localhost nova_compute[228682]: memfd Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: disk Dec 2 04:34:54 localhost nova_compute[228682]: cdrom Dec 2 04:34:54 localhost nova_compute[228682]: floppy Dec 2 04:34:54 localhost nova_compute[228682]: lun Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: fdc Dec 2 04:34:54 localhost nova_compute[228682]: scsi Dec 2 04:34:54 localhost nova_compute[228682]: virtio Dec 2 04:34:54 localhost nova_compute[228682]: usb Dec 2 04:34:54 localhost nova_compute[228682]: sata Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: virtio Dec 2 04:34:54 localhost nova_compute[228682]: virtio-transitional Dec 2 04:34:54 localhost nova_compute[228682]: virtio-non-transitional Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: vnc Dec 2 04:34:54 localhost nova_compute[228682]: egl-headless Dec 2 04:34:54 localhost nova_compute[228682]: dbus Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: subsystem Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: default Dec 2 04:34:54 localhost nova_compute[228682]: mandatory Dec 2 04:34:54 localhost nova_compute[228682]: requisite Dec 2 04:34:54 localhost nova_compute[228682]: optional Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: usb Dec 2 04:34:54 localhost nova_compute[228682]: pci Dec 2 04:34:54 localhost nova_compute[228682]: scsi Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: virtio Dec 2 04:34:54 localhost nova_compute[228682]: virtio-transitional Dec 2 04:34:54 localhost nova_compute[228682]: virtio-non-transitional Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: random Dec 2 04:34:54 localhost nova_compute[228682]: egd Dec 2 04:34:54 localhost nova_compute[228682]: builtin Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: path Dec 2 04:34:54 localhost nova_compute[228682]: handle Dec 2 04:34:54 localhost nova_compute[228682]: virtiofs Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: tpm-tis Dec 2 04:34:54 localhost nova_compute[228682]: tpm-crb Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: emulator Dec 2 04:34:54 localhost nova_compute[228682]: external Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: 2.0 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: 
Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: usb Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: pty Dec 2 04:34:54 localhost nova_compute[228682]: unix Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: qemu Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: builtin Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: default Dec 2 04:34:54 localhost nova_compute[228682]: passt Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: isa Dec 2 04:34:54 localhost nova_compute[228682]: hyperv Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: null Dec 2 04:34:54 localhost nova_compute[228682]: vc Dec 2 04:34:54 localhost nova_compute[228682]: pty Dec 2 04:34:54 localhost nova_compute[228682]: dev Dec 2 04:34:54 localhost nova_compute[228682]: file Dec 2 04:34:54 localhost nova_compute[228682]: pipe Dec 2 04:34:54 localhost 
nova_compute[228682]: stdio Dec 2 04:34:54 localhost nova_compute[228682]: udp Dec 2 04:34:54 localhost nova_compute[228682]: tcp Dec 2 04:34:54 localhost nova_compute[228682]: unix Dec 2 04:34:54 localhost nova_compute[228682]: qemu-vdagent Dec 2 04:34:54 localhost nova_compute[228682]: dbus Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: relaxed Dec 2 04:34:54 localhost nova_compute[228682]: vapic Dec 2 04:34:54 localhost nova_compute[228682]: spinlocks Dec 2 04:34:54 localhost nova_compute[228682]: vpindex Dec 2 04:34:54 localhost nova_compute[228682]: runtime Dec 2 04:34:54 localhost nova_compute[228682]: synic Dec 2 04:34:54 localhost nova_compute[228682]: stimer Dec 2 04:34:54 localhost nova_compute[228682]: reset Dec 2 04:34:54 localhost nova_compute[228682]: vendor_id Dec 2 04:34:54 localhost nova_compute[228682]: frequencies Dec 2 04:34:54 localhost nova_compute[228682]: reenlightenment Dec 2 04:34:54 localhost nova_compute[228682]: tlbflush Dec 2 04:34:54 localhost nova_compute[228682]: ipi Dec 2 04:34:54 localhost nova_compute[228682]: avic Dec 2 04:34:54 localhost nova_compute[228682]: emsr_bitmap Dec 2 04:34:54 localhost nova_compute[228682]: xmm_input Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 
Dec 2 04:34:54 localhost nova_compute[228682]: remaining enum values: 4095 on off off "Linux KVM Hv" tdx
Dec 2 04:34:54 localhost nova_compute[228682]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 2 04:34:54 localhost nova_compute[228682]: 2025-12-02 09:34:54.434 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 2 04:34:54 localhost nova_compute[228682]: 2025-12-02 09:34:54.439 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 2 04:34:54 localhost nova_compute[228682]: [domain capabilities XML, tags lost in capture] path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-i440fx-rhel7.6.0; arch: x86_64
Dec 2 04:34:54 localhost nova_compute[228682]: os loader: /usr/share/OVMF/OVMF_CODE.secboot.fd
Dec 2 04:34:54 localhost nova_compute[228682]: loader type: rom pflash; readonly: yes no; secure: no
Dec 2 04:34:54 localhost nova_compute[228682]: cpu mode enum values: on off; on off
Dec 2 04:34:54 localhost nova_compute[228682]: host CPU model: EPYC-Rome; vendor: AMD
Dec 2 04:34:54 localhost nova_compute[228682]: CPU models: 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4
Dec 2 04:34:54 localhost nova_compute[228682]: CPU models: Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5
localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Conroe Dec 2 04:34:54 localhost nova_compute[228682]: Conroe-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Cooperlake Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cooperlake-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cooperlake-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Denverton Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Denverton-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Denverton-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Denverton-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dhyana Dec 2 04:34:54 localhost nova_compute[228682]: Dhyana-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dhyana-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Genoa Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Genoa-v1 Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-IBPB Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Milan Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Milan-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 
localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Milan-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v4 Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-v1 Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-v2 Dec 2 04:34:54 localhost 
nova_compute[228682]: EPYC-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-v4 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: GraniteRapids Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 
localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: GraniteRapids-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 
localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: GraniteRapids-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 
localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Haswell Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Haswell-IBRS Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-noTSX Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-noTSX-IBRS Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 
localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v4 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-noTSX Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 
localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v4 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 
Dec 2 04:34:54 localhost nova_compute[228682]: [libvirt domain capabilities reply; XML markup lost in log capture — recovered values follow, grouped in original order]
Dec 2 04:34:54 localhost nova_compute[228682]: cpu models: Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Dec 2 04:34:54 localhost nova_compute[228682]: memory backing source types: file anonymous memfd
Dec 2 04:34:54 localhost nova_compute[228682]: disk devices: disk cdrom floppy lun; buses: ide fdc scsi virtio usb sata; models: virtio virtio-transitional virtio-non-transitional
Dec 2 04:34:54 localhost nova_compute[228682]: graphics types: vnc egl-headless dbus
Dec 2 04:34:54 localhost nova_compute[228682]: hostdev mode: subsystem; startup policies: default mandatory requisite optional; subsystem types: usb pci scsi; models: virtio virtio-transitional virtio-non-transitional
Dec 2 04:34:54 localhost nova_compute[228682]: rng backend models: random egd builtin
Dec 2 04:34:54 localhost nova_compute[228682]: filesystem driver types: path handle virtiofs
Dec 2 04:34:54 localhost nova_compute[228682]: tpm models: tpm-tis tpm-crb; backends: emulator external; backend version: 2.0
Dec 2 04:34:54 localhost nova_compute[228682]: redirdev bus: usb
Dec 2 04:34:54 localhost nova_compute[228682]: channel types: pty unix
Dec 2 04:34:54 localhost nova_compute[228682]: crypto backend models: qemu builtin
Dec 2 04:34:54 localhost nova_compute[228682]: interface backends: default passt
Dec 2 04:34:54 localhost nova_compute[228682]: panic models: isa hyperv
Dec 2 04:34:54 localhost nova_compute[228682]: serial/console types: null vc pty dev file pipe stdio udp tcp unix qemu-vdagent dbus
Dec 2 04:34:54 localhost nova_compute[228682]: hyperv features: relaxed vapic spinlocks vpindex runtime synic stimer reset vendor_id frequencies reenlightenment tlbflush ipi avic emsr_bitmap xmm_input; other recovered values (original order): 4095, on, off, off, Linux KVM Hv
Dec 2 04:34:54 localhost nova_compute[228682]: launch security: tdx
localhost nova_compute[228682]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 2 04:34:54 localhost nova_compute[228682]: 2025-12-02 09:34:54.518 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: /usr/libexec/qemu-kvm Dec 2 04:34:54 localhost nova_compute[228682]: kvm Dec 2 04:34:54 localhost nova_compute[228682]: pc-q35-rhel9.8.0 Dec 2 04:34:54 localhost nova_compute[228682]: x86_64 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: efi Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Dec 2 04:34:54 localhost nova_compute[228682]: /usr/share/edk2/ovmf/OVMF_CODE.fd Dec 2 04:34:54 localhost nova_compute[228682]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Dec 2 04:34:54 localhost nova_compute[228682]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: rom Dec 2 04:34:54 localhost nova_compute[228682]: pflash Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: yes Dec 2 04:34:54 localhost nova_compute[228682]: no Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: yes Dec 2 04:34:54 localhost nova_compute[228682]: no Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: on Dec 2 04:34:54 localhost nova_compute[228682]: off Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: on Dec 2 04:34:54 localhost nova_compute[228682]: off Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome Dec 2 04:34:54 localhost nova_compute[228682]: AMD Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: 486 Dec 2 04:34:54 
localhost nova_compute[228682]: 486-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-IBRS Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-noTSX Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-noTSX-IBRS Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Broadwell-v4 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-noTSX Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-v4 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cascadelake-Server-v5 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Conroe Dec 2 04:34:54 localhost nova_compute[228682]: Conroe-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Cooperlake Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cooperlake-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Cooperlake-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 
localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Denverton Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Denverton-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Denverton-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Denverton-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dhyana Dec 2 04:34:54 localhost nova_compute[228682]: Dhyana-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dhyana-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Genoa Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Genoa-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-IBPB Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Milan Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Milan-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Milan-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: 
Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-Rome-v4 Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-v1 Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-v2 Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-v3 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: EPYC-v4 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: GraniteRapids Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 
04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: GraniteRapids-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: GraniteRapids-v2 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-noTSX
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-noTSX-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v3
Dec 2 04:34:54 localhost nova_compute[228682]: Haswell-v4
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-noTSX
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v3
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v4
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v5
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v6
Dec 2 04:34:54 localhost nova_compute[228682]: Icelake-Server-v7
Dec 2 04:34:54 localhost nova_compute[228682]: IvyBridge
Dec 2 04:34:54 localhost nova_compute[228682]: IvyBridge-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: IvyBridge-v1
Dec 2 04:34:54 localhost nova_compute[228682]: IvyBridge-v2
Dec 2 04:34:54 localhost nova_compute[228682]: KnightsMill
Dec 2 04:34:54 localhost nova_compute[228682]: KnightsMill-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Nehalem
Dec 2 04:34:54 localhost nova_compute[228682]: Nehalem-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Nehalem-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Nehalem-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G1
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G1-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G2
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G2-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G3
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G3-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G4
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G4-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G5
Dec 2 04:34:54 localhost nova_compute[228682]: Opteron_G5-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Penryn
Dec 2 04:34:54 localhost nova_compute[228682]: Penryn-v1
Dec 2 04:34:54 localhost nova_compute[228682]: SandyBridge
Dec 2 04:34:54 localhost nova_compute[228682]: SandyBridge-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: SandyBridge-v1
Dec 2 04:34:54 localhost nova_compute[228682]: SandyBridge-v2
Dec 2 04:34:54 localhost nova_compute[228682]: SapphireRapids
Dec 2 04:34:54 localhost nova_compute[228682]: SapphireRapids-v1
Dec 2 04:34:54 localhost nova_compute[228682]: SapphireRapids-v2
Dec 2 04:34:54 localhost nova_compute[228682]: SapphireRapids-v3
Dec 2 04:34:54 localhost nova_compute[228682]: SierraForest
Dec 2 04:34:54 localhost nova_compute[228682]: SierraForest-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-noTSX-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-v3
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Client-v4
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-noTSX-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-v3
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-v4
Dec 2 04:34:54 localhost nova_compute[228682]: Skylake-Server-v5
Dec 2 04:34:54 localhost nova_compute[228682]: Snowridge
Dec 2 04:34:54 localhost nova_compute[228682]: Snowridge-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Snowridge-v2
Dec 2 04:34:54 localhost nova_compute[228682]: Snowridge-v3
Dec 2 04:34:54 localhost nova_compute[228682]: Snowridge-v4
Dec 2 04:34:54 localhost nova_compute[228682]: Westmere
Dec 2 04:34:54 localhost nova_compute[228682]: Westmere-IBRS
Dec 2 04:34:54 localhost nova_compute[228682]: Westmere-v1
Dec 2 04:34:54 localhost nova_compute[228682]: Westmere-v2
Dec 2 04:34:54 localhost nova_compute[228682]: athlon
Dec 2 04:34:54 localhost nova_compute[228682]: athlon-v1
Dec 2 04:34:54 localhost nova_compute[228682]: core2duo
Dec 2 04:34:54 localhost nova_compute[228682]: core2duo-v1
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: coreduo Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: coreduo-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: kvm32 Dec 2 04:34:54 localhost nova_compute[228682]: kvm32-v1 Dec 2 04:34:54 localhost nova_compute[228682]: kvm64 Dec 2 04:34:54 localhost nova_compute[228682]: kvm64-v1 Dec 2 04:34:54 localhost nova_compute[228682]: n270 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: n270-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: pentium Dec 2 04:34:54 localhost nova_compute[228682]: pentium-v1 Dec 2 04:34:54 localhost nova_compute[228682]: pentium2 Dec 2 04:34:54 localhost nova_compute[228682]: pentium2-v1 Dec 2 04:34:54 localhost nova_compute[228682]: pentium3 Dec 2 04:34:54 localhost nova_compute[228682]: pentium3-v1 Dec 2 04:34:54 localhost nova_compute[228682]: phenom Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: phenom-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: qemu32 Dec 2 04:34:54 localhost 
nova_compute[228682]: qemu32-v1 Dec 2 04:34:54 localhost nova_compute[228682]: qemu64 Dec 2 04:34:54 localhost nova_compute[228682]: qemu64-v1 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: file Dec 2 04:34:54 localhost nova_compute[228682]: anonymous Dec 2 04:34:54 localhost nova_compute[228682]: memfd Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: disk Dec 2 04:34:54 localhost nova_compute[228682]: cdrom Dec 2 04:34:54 localhost nova_compute[228682]: floppy Dec 2 04:34:54 localhost nova_compute[228682]: lun Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: fdc Dec 2 04:34:54 localhost nova_compute[228682]: scsi Dec 2 04:34:54 localhost nova_compute[228682]: virtio Dec 2 04:34:54 localhost nova_compute[228682]: usb Dec 2 04:34:54 localhost nova_compute[228682]: sata Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: virtio Dec 2 04:34:54 localhost nova_compute[228682]: virtio-transitional Dec 2 04:34:54 localhost nova_compute[228682]: virtio-non-transitional Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: vnc Dec 2 04:34:54 localhost nova_compute[228682]: egl-headless Dec 2 04:34:54 localhost nova_compute[228682]: dbus Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 
localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: subsystem Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: default Dec 2 04:34:54 localhost nova_compute[228682]: mandatory Dec 2 04:34:54 localhost nova_compute[228682]: requisite Dec 2 04:34:54 localhost nova_compute[228682]: optional Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: usb Dec 2 04:34:54 localhost nova_compute[228682]: pci Dec 2 04:34:54 localhost nova_compute[228682]: scsi Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: virtio Dec 2 04:34:54 localhost nova_compute[228682]: virtio-transitional Dec 2 04:34:54 localhost nova_compute[228682]: virtio-non-transitional Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: random Dec 2 04:34:54 localhost nova_compute[228682]: egd Dec 2 04:34:54 localhost nova_compute[228682]: builtin Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: path Dec 2 04:34:54 localhost nova_compute[228682]: handle Dec 2 04:34:54 localhost nova_compute[228682]: virtiofs Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost 
nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: tpm-tis Dec 2 04:34:54 localhost nova_compute[228682]: tpm-crb Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: emulator Dec 2 04:34:54 localhost nova_compute[228682]: external Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: 2.0 Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: usb Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: pty Dec 2 04:34:54 localhost nova_compute[228682]: unix Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: qemu Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: builtin Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: default Dec 2 04:34:54 localhost nova_compute[228682]: passt Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 
localhost nova_compute[228682]: isa Dec 2 04:34:54 localhost nova_compute[228682]: hyperv Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: null Dec 2 04:34:54 localhost nova_compute[228682]: vc Dec 2 04:34:54 localhost nova_compute[228682]: pty Dec 2 04:34:54 localhost nova_compute[228682]: dev Dec 2 04:34:54 localhost nova_compute[228682]: file Dec 2 04:34:54 localhost nova_compute[228682]: pipe Dec 2 04:34:54 localhost nova_compute[228682]: stdio Dec 2 04:34:54 localhost nova_compute[228682]: udp Dec 2 04:34:54 localhost nova_compute[228682]: tcp Dec 2 04:34:54 localhost nova_compute[228682]: unix Dec 2 04:34:54 localhost nova_compute[228682]: qemu-vdagent Dec 2 04:34:54 localhost nova_compute[228682]: dbus Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: relaxed Dec 2 04:34:54 localhost nova_compute[228682]: vapic Dec 2 04:34:54 localhost nova_compute[228682]: spinlocks Dec 2 04:34:54 localhost nova_compute[228682]: vpindex Dec 2 04:34:54 localhost nova_compute[228682]: runtime Dec 2 04:34:54 localhost nova_compute[228682]: synic Dec 2 04:34:54 localhost nova_compute[228682]: stimer Dec 2 
04:34:54 localhost nova_compute[228682]: reset Dec 2 04:34:54 localhost nova_compute[228682]: vendor_id Dec 2 04:34:54 localhost nova_compute[228682]: frequencies Dec 2 04:34:54 localhost nova_compute[228682]: reenlightenment Dec 2 04:34:54 localhost nova_compute[228682]: tlbflush Dec 2 04:34:54 localhost nova_compute[228682]: ipi Dec 2 04:34:54 localhost nova_compute[228682]: avic Dec 2 04:34:54 localhost nova_compute[228682]: emsr_bitmap Dec 2 04:34:54 localhost nova_compute[228682]: xmm_input Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: 4095 Dec 2 04:34:54 localhost nova_compute[228682]: on Dec 2 04:34:54 localhost nova_compute[228682]: off Dec 2 04:34:54 localhost nova_compute[228682]: off Dec 2 04:34:54 localhost nova_compute[228682]: Linux KVM Hv Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: tdx Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: Dec 2 04:34:54 localhost nova_compute[228682]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 2 04:34:54 localhost nova_compute[228682]: 2025-12-02 09:34:54.586 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 2 04:34:54 localhost nova_compute[228682]: 2025-12-02 09:34:54.586 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 2 04:34:54 localhost nova_compute[228682]: 2025-12-02 09:34:54.586 228686 INFO nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Secure Boot support detected#033[00m Dec 2 04:34:54 localhost nova_compute[228682]: 2025-12-02 09:34:54.589 228686 INFO nova.virt.libvirt.driver [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Dec 2 04:34:54 localhost nova_compute[228682]: 2025-12-02 09:34:54.596 228686 DEBUG nova.virt.libvirt.driver [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Dec 2 04:34:54 localhost nova_compute[228682]: 2025-12-02 09:34:54.611 228686 INFO nova.virt.node [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Determined node identity 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from /var/lib/nova/compute_id#033[00m Dec 2 04:34:54 localhost nova_compute[228682]: 2025-12-02 09:34:54.624 228686 DEBUG nova.compute.manager [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Verified node 9ec09c1a-d246-41d7-94f4-b482f646a9f1 matches my host
np0005541914.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Dec 2 04:34:54 localhost nova_compute[228682]: 2025-12-02 09:34:54.648 228686 INFO nova.compute.manager [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Dec 2 04:34:55 localhost nova_compute[228682]: 2025-12-02 09:34:55.059 228686 INFO nova.service [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Updating service version for nova-compute on np0005541914.localdomain from 57 to 66#033[00m Dec 2 04:34:55 localhost nova_compute[228682]: 2025-12-02 09:34:55.098 228686 DEBUG oslo_concurrency.lockutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:34:55 localhost nova_compute[228682]: 2025-12-02 09:34:55.098 228686 DEBUG oslo_concurrency.lockutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:34:55 localhost nova_compute[228682]: 2025-12-02 09:34:55.099 228686 DEBUG oslo_concurrency.lockutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:34:55 localhost nova_compute[228682]: 2025-12-02 09:34:55.099 228686 DEBUG nova.compute.resource_tracker [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Auditing locally available compute resources for np0005541914.localdomain 
(node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:34:55 localhost nova_compute[228682]: 2025-12-02 09:34:55.100 228686 DEBUG oslo_concurrency.processutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:34:55 localhost python3.9[229197]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None 
ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 2 04:34:55 localhost systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 120.1 (400 of 333 items), suggesting rotation. Dec 2 04:34:55 localhost systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 04:34:55 localhost rsyslogd[759]: imjournal: journal files changed, reloading...
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:34:55 localhost nova_compute[228682]: 2025-12-02 09:34:55.570 228686 DEBUG oslo_concurrency.processutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:34:55 localhost systemd[1]: Started libvirt nodedev daemon. Dec 2 04:34:55 localhost nova_compute[228682]: 2025-12-02 09:34:55.960 228686 WARNING nova.virt.libvirt.driver [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:34:55 localhost nova_compute[228682]: 2025-12-02 09:34:55.962 228686 DEBUG nova.compute.resource_tracker [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=13614MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", 
"product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:34:55 localhost nova_compute[228682]: 2025-12-02 09:34:55.963 228686 DEBUG oslo_concurrency.lockutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:34:55 localhost nova_compute[228682]: 2025-12-02 09:34:55.963 228686 DEBUG oslo_concurrency.lockutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:34:56 localhost nova_compute[228682]: 2025-12-02 09:34:56.110 228686 DEBUG nova.compute.resource_tracker [None req-80642093-62fb-492b-9433-bc788811bc2b 
- - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:34:56 localhost nova_compute[228682]: 2025-12-02 09:34:56.111 228686 DEBUG nova.compute.resource_tracker [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:34:56 localhost nova_compute[228682]: 2025-12-02 09:34:56.133 228686 DEBUG nova.scheduler.client.report [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 04:34:56 localhost nova_compute[228682]: 2025-12-02 09:34:56.190 228686 DEBUG nova.scheduler.client.report [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 04:34:56 localhost nova_compute[228682]: 2025-12-02 09:34:56.191 228686 DEBUG nova.compute.provider_tree [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 
'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 04:34:56 localhost nova_compute[228682]: 2025-12-02 09:34:56.214 228686 DEBUG nova.scheduler.client.report [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 04:34:56 localhost nova_compute[228682]: 2025-12-02 09:34:56.288 228686 DEBUG nova.scheduler.client.report [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: 
COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_LAN9118,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 04:34:56 localhost nova_compute[228682]: 2025-12-02 09:34:56.305 228686 DEBUG oslo_concurrency.processutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:34:56 localhost python3.9[229377]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:34:56 
localhost systemd[1]: Stopping nova_compute container... Dec 2 04:34:56 localhost nova_compute[228682]: 2025-12-02 09:34:56.573 228686 DEBUG oslo_concurrency.lockutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:34:56 localhost nova_compute[228682]: 2025-12-02 09:34:56.574 228686 DEBUG oslo_concurrency.lockutils [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:34:56 localhost nova_compute[228682]: 2025-12-02 09:34:56.574 228686 DEBUG oslo_concurrency.lockutils [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:34:56 localhost nova_compute[228682]: 2025-12-02 09:34:56.575 228686 DEBUG oslo_concurrency.lockutils [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:34:56 localhost journal[228953]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, ) Dec 2 04:34:56 localhost journal[228953]: hostname: np0005541914.localdomain Dec 2 04:34:56 localhost journal[228953]: End of file while reading data: Input/output error Dec 2 04:34:56 localhost systemd[1]: libpod-e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256.scope: Deactivated successfully. 
Dec 2 04:34:56 localhost podman[229401]: 2025-12-02 09:34:56.993784121 +0000 UTC m=+0.496911706 container died e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, config_id=edpm, container_name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 04:34:56 localhost systemd[1]: libpod-e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256.scope: Consumed 3.645s CPU time. Dec 2 04:34:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256-userdata-shm.mount: Deactivated successfully. 
Dec 2 04:34:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18165 DF PROTO=TCP SPT=46438 DPT=9101 SEQ=2661148423 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54791630000000001030307) Dec 2 04:34:57 localhost systemd[1]: var-lib-containers-storage-overlay-dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168-merged.mount: Deactivated successfully. Dec 2 04:34:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10789 DF PROTO=TCP SPT=42652 DPT=9105 SEQ=18637374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54793220000000001030307) Dec 2 04:35:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51881 DF PROTO=TCP SPT=44782 DPT=9102 SEQ=1218652330 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547A1220000000001030307) Dec 2 04:35:01 localhost podman[229401]: 2025-12-02 09:35:01.359398423 +0000 UTC m=+4.862526038 container cleanup e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0) Dec 2 04:35:01 localhost podman[229401]: nova_compute Dec 2 04:35:01 localhost podman[229569]: error opening file `/run/crun/e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256/status`: No such file or directory Dec 2 04:35:01 localhost podman[229558]: 2025-12-02 09:35:01.442897331 +0000 UTC m=+0.056182979 container cleanup e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute) Dec 2 04:35:01 localhost podman[229558]: nova_compute Dec 2 04:35:01 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Dec 2 04:35:01 localhost systemd[1]: Stopped nova_compute container. Dec 2 04:35:01 localhost systemd[1]: Starting nova_compute container... Dec 2 04:35:01 localhost systemd[1]: Started libcrun container. 
Dec 2 04:35:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:01 localhost podman[229571]: 2025-12-02 09:35:01.585925643 +0000 UTC m=+0.104913300 container init e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', 
'/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=nova_compute) Dec 2 04:35:01 localhost podman[229571]: 2025-12-02 09:35:01.591981595 +0000 UTC m=+0.110969262 container start e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', 
'/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute) Dec 2 04:35:01 localhost podman[229571]: nova_compute Dec 2 04:35:01 localhost nova_compute[229585]: + sudo -E kolla_set_configs Dec 2 04:35:01 localhost systemd[1]: Started nova_compute container. Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Validating config file Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Copying service configuration files Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Deleting /etc/nova/nova.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /etc/nova/nova.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to 
/etc/nova/nova.conf.d/03-ceph-nova.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Deleting /etc/ceph Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Creating directory /etc/ceph Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /etc/ceph Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 2 04:35:01 localhost 
nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Deleting /usr/sbin/iscsiadm Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Writing out command to execute Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:35:01 localhost nova_compute[229585]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 2 04:35:01 localhost 
nova_compute[229585]: ++ cat /run_command Dec 2 04:35:01 localhost nova_compute[229585]: + CMD=nova-compute Dec 2 04:35:01 localhost nova_compute[229585]: + ARGS= Dec 2 04:35:01 localhost nova_compute[229585]: + sudo kolla_copy_cacerts Dec 2 04:35:01 localhost nova_compute[229585]: + [[ ! -n '' ]] Dec 2 04:35:01 localhost nova_compute[229585]: + . kolla_extend_start Dec 2 04:35:01 localhost nova_compute[229585]: + echo 'Running command: '\''nova-compute'\''' Dec 2 04:35:01 localhost nova_compute[229585]: Running command: 'nova-compute' Dec 2 04:35:01 localhost nova_compute[229585]: + umask 0022 Dec 2 04:35:01 localhost nova_compute[229585]: + exec nova-compute Dec 2 04:35:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:35:02 localhost podman[229614]: 2025-12-02 09:35:02.335702006 +0000 UTC m=+0.088190488 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible) Dec 2 04:35:02 localhost podman[229614]: 2025-12-02 09:35:02.350244969 +0000 UTC m=+0.102733431 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible) Dec 2 04:35:02 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:35:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:35:03.143 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:35:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:35:03.144 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:35:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:35:03.144 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:35:03 localhost nova_compute[229585]: 2025-12-02 09:35:03.480 229589 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 2 04:35:03 localhost nova_compute[229585]: 2025-12-02 09:35:03.481 229589 DEBUG os_vif [-] Loaded VIF plugin class '' with 
name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 2 04:35:03 localhost nova_compute[229585]: 2025-12-02 09:35:03.481 229589 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 2 04:35:03 localhost nova_compute[229585]: 2025-12-02 09:35:03.481 229589 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Dec 2 04:35:03 localhost nova_compute[229585]: 2025-12-02 09:35:03.609 229589 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:35:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14214 DF PROTO=TCP SPT=33630 DPT=9882 SEQ=58620176 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547AA910000000001030307) Dec 2 04:35:03 localhost nova_compute[229585]: 2025-12-02 09:35:03.631 229589 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:35:03 localhost nova_compute[229585]: 2025-12-02 09:35:03.632 229589 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. 
execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Dec 2 04:35:04 localhost python3.9[229731]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None 
pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.097 229589 INFO nova.virt.driver [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.219 229589 INFO nova.compute.provider_config [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] No provider configs found in /etc/nova/provider_config/. 
If files are present, ensure the Nova process has access.#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.300 229589 WARNING nova.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.300 229589 DEBUG oslo_concurrency.lockutils [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.300 229589 DEBUG oslo_concurrency.lockutils [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.301 229589 DEBUG oslo_concurrency.lockutils [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.301 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.301 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ******************************************************************************** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.301 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.301 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.301 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.302 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.302 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.302 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.302 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] backdoor_port = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.302 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.302 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.302 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] console_host = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.304 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.304 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.304 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] daemon = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.304 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.304 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.304 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.304 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.305 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost 
nova_compute[229585]: 2025-12-02 09:35:04.305 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.305 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.305 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.305 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.305 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.305 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.306 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.306 229589 DEBUG oslo_service.service 
[None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.306 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.306 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.306 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] host = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.306 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.306 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - 
- - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.308 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.308 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.308 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.308 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.308 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.308 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.308 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - 
- -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.310 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.310 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.310 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.310 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.310 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.310 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] max_concurrent_live_migrations = 
1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.310 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 
04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.312 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.312 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] my_block_storage_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.312 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] my_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.312 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.312 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 
2025-12-02 09:35:04.312 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.312 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.314 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.314 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.314 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.314 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.314 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rate_limit_interval = 0 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.314 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.314 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 
2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG 
oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.317 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.317 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.317 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.317 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.317 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.317 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.317 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.318 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.318 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.318 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.318 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] sync_power_state_interval = 600 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.318 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.318 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.318 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.319 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.319 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.319 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.319 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 
2025-12-02 09:35:04.319 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.319 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.319 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plugging_is_fatal 
= True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.321 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.321 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.321 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.321 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.321 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.321 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.322 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.322 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.322 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.322 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.322 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - 
- -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.323 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.323 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.323 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.323 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.323 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.323 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.323 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.325 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.325 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.325 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.325 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.325 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.325 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.325 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.326 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.326 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.326 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.326 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.326 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.326 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] 
cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.326 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.327 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.327 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.327 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.327 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.327 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.327 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.327 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.328 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.328 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.328 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.328 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.328 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.328 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_username = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.328 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 
localhost nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.330 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.330 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.330 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.330 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.330 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.330 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.insecure = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.332 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.332 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.332 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.332 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.332 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.332 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.cpu_shared_set = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.332 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] 
compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.334 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.334 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.334 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.334 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.334 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] 
consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.334 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.336 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.336 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.336 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.336 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.336 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost 
nova_compute[229585]: 2025-12-02 09:35:04.336 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.336 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG 
oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.338 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.338 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.338 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.338 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.338 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.338 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.338 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.339 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.339 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.339 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.339 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.339 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] 
database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.339 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.339 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.340 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.340 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.340 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.340 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.340 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.connection_parameters = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.340 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.340 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.max_overflow = 50 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.342 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.342 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.342 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.342 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.retry_interval = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.342 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.342 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.343 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.343 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.343 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.343 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.343 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.api_servers = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.343 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.343 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 
localhost nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.345 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.345 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.345 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.345 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.345 229589 DEBUG 
oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.345 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.345 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.346 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.346 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.346 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.346 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.346 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.service_type = image 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.346 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.346 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.347 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.347 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.347 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.347 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.347 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.version = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.347 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.347 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.instances_path_share = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.349 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.349 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.349 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.349 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.349 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.349 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.349 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.350 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.350 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.350 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.350 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.350 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.350 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.351 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.351 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.351 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.351 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.351 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] 
ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.351 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.351 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.352 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.352 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.352 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.352 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.352 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.connect_retries = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.352 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.352 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost 
nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.354 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.354 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.354 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.354 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.354 229589 DEBUG 
oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.354 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.354 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.355 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.355 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.355 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.355 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.355 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.355 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.355 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.356 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.356 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.356 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.356 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.356 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] 
barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.356 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.356 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.verify_ssl = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.358 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.358 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.358 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.358 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.358 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.358 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.358 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.360 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.360 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.360 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.360 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.360 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.360 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.360 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.collect_timing = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.362 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.362 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.362 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.min_version = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.362 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.362 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.362 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.362 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.363 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.363 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.363 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.363 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.363 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.363 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.363 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.365 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.365 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.disk_prefix = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.365 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.365 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.365 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.366 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.366 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.366 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.366 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.366 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.366 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.366 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.inject_partition = -2 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.368 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.368 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.368 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.368 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.368 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.368 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.368 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.369 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.369 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.369 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.369 229589 WARNING oslo_config.cfg [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 2 04:35:04 localhost nova_compute[229585]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 2 04:35:04 localhost nova_compute[229585]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 2 04:35:04 localhost nova_compute[229585]: and ``live_migration_inbound_addr`` respectively.
Dec 2 04:35:04 localhost nova_compute[229585]: ). Its value may be silently ignored in the future.
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.369 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.369 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.370 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.370 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.370 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.370 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.370 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.370 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.370 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.371 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.371 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.371 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.371 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.371 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.371 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.371 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.372 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.372 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.372 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rbd_secret_uuid = c7c8e171-a193-56fb-95fa-8879fcfa7074 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.372 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.372 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.372 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.372 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.373 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.373 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.373 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.373 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.373 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.373 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost systemd[1]: Started libpod-conmon-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e.scope.
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.373 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.374 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.374 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.374 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.374 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.374 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.374 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.374 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.375 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.375 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.375 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.375 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.375 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.375 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.375 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.376 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.376 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.376 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.376 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.376 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.376 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.378 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.378 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.378 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.378 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.378 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.378 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.378 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.379 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.379 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.379 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.379 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.379 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.380 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.380 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.380 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.380 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.380 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.380 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.380 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.381 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.381 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.381 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.381 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.381 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.381 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.381 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.382 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.382 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.382 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.382 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.382 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.382 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.382 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.384 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.384 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.384 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.384 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.384 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.384 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.384 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.385 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.385 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.385 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.385 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.385 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.385 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.385 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.386 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.386 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.386 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.386 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.386 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.386 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.386 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec
2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.388 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 
2025-12-02 09:35:04.388 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.388 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.388 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.388 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.388 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.389 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.389 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.389 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - 
- - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.389 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.389 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.389 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.390 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.390 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.390 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.390 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] 
scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.390 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.390 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.390 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG 
oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.392 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.392 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.392 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.392 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.392 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.392 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.392 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.393 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 
09:35:04.393 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.393 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.393 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.393 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.393 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.393 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.394 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 
localhost nova_compute[229585]: 2025-12-02 09:35:04.394 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.394 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.394 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.394 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.394 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.394 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.395 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.395 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.395 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.395 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.395 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.395 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.395 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.396 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.396 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.396 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.396 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.396 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.396 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.396 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.397 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.397 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.397 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.397 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.397 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.397 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.397 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.399 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.399 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.399 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.399 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.399 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.399 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.399 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 
09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.402 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.402 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.402 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.402 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.402 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.402 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f 
- - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.402 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.serial_port_service_uri = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.404 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.404 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.404 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.404 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.404 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 
localhost nova_compute[229585]: 2025-12-02 09:35:04.404 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.404 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.405 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.405 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost systemd[1]: Started libcrun container. 
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.405 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.405 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.405 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.405 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.405 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.406 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.406 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.406 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.406 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.406 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.406 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.406 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.407 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.407 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] 
workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.407 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.407 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.407 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.407 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.408 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.408 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.408 229589 DEBUG 
oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.408 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.408 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.408 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.408 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.409 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.409 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost 
nova_compute[229585]: 2025-12-02 09:35:04.409 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.409 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.409 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.409 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.409 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.410 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.410 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.410 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.410 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.410 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.410 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.411 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] zvm.image_tmp_path = 
/var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.411 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.411 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.411 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.411 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.411 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.411 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.remote_content_type = 
application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.413 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.413 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.413 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.413 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.413 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.413 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.413 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 
localhost nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.415 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.415 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.415 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.415 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.415 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.415 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.415 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.416 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.416 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.416 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.416 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.416 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - 
- -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.416 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.416 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.417 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.417 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.417 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.417 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.417 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] 
oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.417 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.417 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.418 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.418 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.418 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.418 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.418 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.418 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.418 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - 
- -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.420 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.420 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.420 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.420 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.password = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.420 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.420 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost podman[229754]: 2025-12-02 09:35:04.419904209 +0000 UTC m=+0.167036128 container init 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:35:04 localhost 
nova_compute[229585]: 2025-12-02 09:35:04.420 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG 
oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.422 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.422 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.422 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.422 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.422 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.422 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.422 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None 
req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.424 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.424 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.424 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.424 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost 
nova_compute[229585]: 2025-12-02 09:35:04.424 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.424 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.424 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.426 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.426 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.426 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.426 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.426 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.426 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.426 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.430 229589 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m Dec 2 04:35:04 localhost 
podman[229754]: 2025-12-02 09:35:04.430509606 +0000 UTC m=+0.177641525 container start 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 04:35:04 localhost python3.9[229731]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.451 229589 INFO nova.virt.node [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Determined node identity 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from /var/lib/nova/compute_id#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.452 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m 
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.453 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.453 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.453 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.464 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.466 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.467 229589 INFO nova.virt.libvirt.driver [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.472 229589 INFO nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Libvirt host capabilities#033[00m
[multi-line libvirt host capabilities XML followed here; its markup was stripped during extraction, leaving only text values. Recoverable fields: host UUID 64aa5208-7bf7-490c-857b-3c1a3cae8bb3; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory 16116612 KiB (4029153 pages of 4 KiB, hugepage counts 0 and 0); security models selinux (doi 0, base labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (doi 0, +107:+107); hvm guests for i686 (wordsize 32) and x86_64 (wordsize 64), both via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (alias pc) and pc-q35-rhel9.8.0 (alias q35) plus pc-q35-rhel 7.6.0, 8.0.0, 8.1.0, 8.2.0, 8.3.0, 8.4.0, 8.5.0, 8.6.0, 9.0.0, 9.2.0, 9.4.0 and 9.6.0]
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.479 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.482 229589 DEBUG nova.virt.libvirt.volume.mount [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.484 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
[libvirt domain capabilities XML followed here, also stripped of markup during extraction; recoverable fields before the excerpt cuts off: emulator path /usr/libexec/qemu-kvm, domain type kvm]
localhost nova_compute[229585]: pc-i440fx-rhel7.6.0 Dec 2 04:35:04 localhost nova_compute[229585]: i686 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: /usr/share/OVMF/OVMF_CODE.secboot.fd Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: rom Dec 2 04:35:04 localhost nova_compute[229585]: pflash Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: yes Dec 2 04:35:04 localhost nova_compute[229585]: no Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: no Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: on Dec 2 04:35:04 localhost nova_compute[229585]: off Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: on Dec 2 04:35:04 localhost nova_compute[229585]: off Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Rome Dec 2 04:35:04 localhost nova_compute[229585]: AMD Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 
Dec 2 04:35:04 localhost nova_compute[229585]: [domainCapabilities CPU model list; usability attributes lost in capture] 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids
Dec 2 04:35:04 localhost nova_compute[229585]: [domainCapabilities CPU model list continues] GraniteRapids-v1
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Applying nova statedir ownership
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/20273498b7380904530133bcb3f720bd45f4f00b810dc4597d81d23acd8f9673
Dec 2 04:35:04 localhost nova_compute_init[229774]: INFO:nova_statedir:Nova statedir ownership complete
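The nova_compute_init messages above follow a simple pattern: for each path under /var/lib/nova, compare the current uid:gid against the target 42436:42436 and either report a change or note that ownership already matches. A minimal sketch of that decision logic, assuming a hypothetical helper name and pre-collected (path, uid, gid) tuples rather than the real os.chown/stat walk:

```python
def ownership_actions(entries, target=(42436, 42436)):
    """Mirror the nova_statedir log lines: for each (path, uid, gid),
    emit a 'Checking' line, then either a 'Changing ownership' line
    (owner differs from target) or an 'already' line (owner matches).
    Hypothetical helper; the real tool also calls os.chown and sets
    the SELinux context."""
    tuid, tgid = target
    msgs = []
    for path, uid, gid in entries:
        msgs.append(f"Checking uid: {uid} gid: {gid} path: {path}")
        if (uid, gid) != (tuid, tgid):
            msgs.append(
                f"Changing ownership of {path} from {uid}:{gid} to {tuid}:{tgid}"
            )
        else:
            msgs.append(f"Ownership of {path} already {tuid}:{tgid}")
    return msgs
```

Fed the owners seen in the log (1000:1000 for /var/lib/nova, 42436:42436 for /var/lib/nova/.ssh), this reproduces the same Changing/already split as the INFO lines above.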
Dec 2 04:35:04 localhost nova_compute[229585]: [domainCapabilities CPU model list continues] GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4
04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-noTSX Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 
localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v3 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v4 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 
2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v5 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v6 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v7 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: IvyBridge Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: IvyBridge-IBRS Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: IvyBridge-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: IvyBridge-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: KnightsMill Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: KnightsMill-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 
localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Nehalem Dec 2 04:35:04 localhost nova_compute[229585]: Nehalem-IBRS Dec 2 04:35:04 localhost nova_compute[229585]: Nehalem-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Nehalem-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G1 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G1-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G2 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G2-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G3 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G3-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G4 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G4-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G5 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G5-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 
04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Penryn Dec 2 04:35:04 localhost nova_compute[229585]: Penryn-v1 Dec 2 04:35:04 localhost nova_compute[229585]: SandyBridge Dec 2 04:35:04 localhost nova_compute[229585]: SandyBridge-IBRS Dec 2 04:35:04 localhost nova_compute[229585]: SandyBridge-v1 Dec 2 04:35:04 localhost nova_compute[229585]: SandyBridge-v2 Dec 2 04:35:04 localhost nova_compute[229585]: SapphireRapids Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: SapphireRapids-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: SapphireRapids-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: SapphireRapids-v3 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: SierraForest Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: SierraForest-v1 Dec 2 04:35:04 
localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Skylake-Client Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Skylake-Client-IBRS Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: 
Dec 2 04:35:04 localhost nova_compute[229585]: [multi-line libvirt domain capabilities XML; element markup was lost in log capture, leaving only the values below. Groupings are inferred from the surviving order.]
    CPU models: Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1
Dec 2 04:35:04 localhost systemd[1]: libpod-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e.scope: Deactivated successfully.
Dec 2 04:35:04 localhost nova_compute[229585]: [capabilities dump continues:]
    CPU models (continued): Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
    memory backing: file, anonymous, memfd
    disk device types: disk, cdrom, floppy, lun; buses: ide, fdc, scsi, virtio, usb, sata; virtio models: virtio, virtio-transitional, virtio-non-transitional
    graphics: vnc, egl-headless, dbus
    hostdev: subsystem; startup policies: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi; virtio models: virtio, virtio-transitional, virtio-non-transitional
    rng backends: random, egd, builtin
    filesystem: path, handle, virtiofs
    tpm models: tpm-tis, tpm-crb; backends: emulator, external; version: 2.0
    redirdev: usb; channel: pty, unix; qemu, builtin
    interface backends: default, passt
    panic models: isa, hyperv
    char device types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
    Hyper-V enlightenments: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer
Dec 2 04:35:04 localhost podman[229775]: 2025-12-02 09:35:04.518224668 +0000 UTC m=+0.067206620 container died 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251125)
Dec 2 04:35:04 localhost nova_compute[229585]: [capabilities dump continues:]
    Hyper-V enlightenments (continued): reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; spinlock retries: 4095; on, off, off; vendor_id value: Linux KVM Hv
    launch security: tdx
    _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.488 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
    emulator: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-q35-rhel9.8.0; arch: i686
    firmware loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; types: rom, pflash; yes, no; no; on, off; on, off
    host CPU model: EPYC-Rome; vendor: AMD
    CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, [list continues]
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Milan-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Milan-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Rome Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Rome-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Rome-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Rome-v3 Dec 2 04:35:04 
localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Rome-v4 Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-v1 Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-v2 Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-v3 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-v4 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: GraniteRapids Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: GraniteRapids-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: GraniteRapids-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-IBRS Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-noTSX Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-noTSX-IBRS Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Haswell-v3 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-v4 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-noTSX Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 
localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v3 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 
localhost nova_compute[229585]: Icelake-Server-v4 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v5 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 
2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v6 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v7 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: IvyBridge Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: IvyBridge-IBRS Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: IvyBridge-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: IvyBridge-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: KnightsMill Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: KnightsMill-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Nehalem Dec 2 04:35:04 localhost nova_compute[229585]: Nehalem-IBRS Dec 2 04:35:04 localhost nova_compute[229585]: Nehalem-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Nehalem-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G1 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G1-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G2 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G2-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G3 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G3-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G4 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G4-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
Dec 2 04:35:04 localhost nova_compute[229585]: [libvirt domainCapabilities XML dumped to the log; the markup and repeated syslog prefixes were mangled in capture. Recoverable values follow; field names are reconstructed from the domainCapabilities schema.]
Dec 2 04:35:04 localhost nova_compute[229585]: CPU models: Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Dec 2 04:35:04 localhost nova_compute[229585]: memoryBacking sourceType: file, anonymous, memfd; disk diskDevice: disk, cdrom, floppy, lun; disk bus: fdc, scsi, virtio, usb, sata; disk model: virtio, virtio-transitional, virtio-non-transitional
Dec 2 04:35:04 localhost nova_compute[229585]: graphics type: vnc, egl-headless, dbus; hostdev mode: subsystem; hostdev startupPolicy: default, mandatory, requisite, optional; hostdev subsysType: usb, pci, scsi; hostdev model: virtio, virtio-transitional, virtio-non-transitional
Dec 2 04:35:04 localhost nova_compute[229585]: rng backendModel: random, egd, builtin; filesystem driverType: path, handle, virtiofs; tpm model: tpm-tis, tpm-crb; tpm backendModel: emulator, external; tpm backendVersion: 2.0; redirdev bus: usb
Dec 2 04:35:04 localhost nova_compute[229585]: channel type: pty, unix; crypto type: qemu; crypto backendModel: builtin; interface backendType: default, passt; panic model: isa, hyperv
Dec 2 04:35:04 localhost nova_compute[229585]: character device types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Dec 2 04:35:04 localhost nova_compute[229585]: hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; additional values: 4095, on, off, off; vendor_id: Linux KVM Hv
Dec 2 04:35:04 localhost nova_compute[229585]: features: tdx
Dec 2 04:35:04 localhost nova_compute[229585]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.512 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.518 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 2 04:35:04 localhost nova_compute[229585]: [domainCapabilities for machine_type=pc; same capture mangling, field names reconstructed.] path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-i440fx-rhel7.6.0; arch: x86_64; loader value: /usr/share/OVMF/OVMF_CODE.secboot.fd; loader type: rom, pflash; readonly: yes, no; secure: no; cpu host-model: EPYC-Rome, vendor AMD
Dec 2 04:35:04 localhost nova_compute[229585]: CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX [log truncated]
localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Cascadelake-Server-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Cascadelake-Server-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Cascadelake-Server-v3 Dec 2 04:35:04 
localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Cascadelake-Server-v4 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Cascadelake-Server-v5 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Conroe Dec 2 04:35:04 localhost nova_compute[229585]: Conroe-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Cooperlake Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Cooperlake-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Cooperlake-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Denverton Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Denverton-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Denverton-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Denverton-v3 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dhyana Dec 2 04:35:04 localhost nova_compute[229585]: Dhyana-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dhyana-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 
localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Genoa Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Genoa-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 
04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-IBPB Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Milan Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Milan-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Milan-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Rome Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Rome-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Rome-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Rome-v3 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-Rome-v4 Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-v1 Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-v2 Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-v3 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: EPYC-v4 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: 
GraniteRapids Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 
04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: GraniteRapids-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 
localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: GraniteRapids-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 
localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-IBRS Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-noTSX Dec 2 
04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-noTSX-IBRS Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-v3 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-v4 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server Dec 2 04:35:04 localhost nova_compute[229585]: 
Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-noTSX Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 
2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
Dec 2 04:35:04 localhost nova_compute[229585]: [libvirt domain capabilities output; XML markup lost in log extraction — recoverable values follow]
Dec 2 04:35:04 localhost nova_compute[229585]: CPU models: Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Dec 2 04:35:04 localhost nova_compute[229585]: memory backing source types: file anonymous memfd
Dec 2 04:35:04 localhost nova_compute[229585]: disk device types: disk cdrom floppy lun; disk buses: ide fdc scsi virtio usb sata; disk models: virtio virtio-transitional virtio-non-transitional
Dec 2 04:35:04 localhost nova_compute[229585]: graphics types: vnc egl-headless dbus
Dec 2 04:35:04 localhost nova_compute[229585]: hostdev mode: subsystem; startupPolicy: default mandatory requisite optional; subsystem types: usb pci scsi
Dec 2 04:35:04 localhost nova_compute[229585]: rng models: virtio virtio-transitional virtio-non-transitional; rng backends: random egd builtin
Dec 2 04:35:04 localhost nova_compute[229585]: filesystem driver types: path handle virtiofs
Dec 2 04:35:04 localhost nova_compute[229585]: tpm models: tpm-tis tpm-crb; tpm backends: emulator external; backend version: 2.0
Dec 2 04:35:04 localhost nova_compute[229585]: redirdev bus: usb; channel types: pty unix; further enum values: qemu builtin default (entry truncated)
2 04:35:04 localhost nova_compute[229585]: passt Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: isa Dec 2 04:35:04 localhost nova_compute[229585]: hyperv Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: null Dec 2 04:35:04 localhost nova_compute[229585]: vc Dec 2 04:35:04 localhost nova_compute[229585]: pty Dec 2 04:35:04 localhost nova_compute[229585]: dev Dec 2 04:35:04 localhost nova_compute[229585]: file Dec 2 04:35:04 localhost nova_compute[229585]: pipe Dec 2 04:35:04 localhost nova_compute[229585]: stdio Dec 2 04:35:04 localhost nova_compute[229585]: udp Dec 2 04:35:04 localhost nova_compute[229585]: tcp Dec 2 04:35:04 localhost nova_compute[229585]: unix Dec 2 04:35:04 localhost nova_compute[229585]: qemu-vdagent Dec 2 04:35:04 localhost nova_compute[229585]: dbus Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: relaxed Dec 2 04:35:04 localhost nova_compute[229585]: vapic Dec 2 04:35:04 localhost 
nova_compute[229585]: spinlocks Dec 2 04:35:04 localhost nova_compute[229585]: vpindex Dec 2 04:35:04 localhost nova_compute[229585]: runtime Dec 2 04:35:04 localhost nova_compute[229585]: synic Dec 2 04:35:04 localhost nova_compute[229585]: stimer Dec 2 04:35:04 localhost nova_compute[229585]: reset Dec 2 04:35:04 localhost nova_compute[229585]: vendor_id Dec 2 04:35:04 localhost nova_compute[229585]: frequencies Dec 2 04:35:04 localhost nova_compute[229585]: reenlightenment Dec 2 04:35:04 localhost nova_compute[229585]: tlbflush Dec 2 04:35:04 localhost nova_compute[229585]: ipi Dec 2 04:35:04 localhost nova_compute[229585]: avic Dec 2 04:35:04 localhost nova_compute[229585]: emsr_bitmap Dec 2 04:35:04 localhost nova_compute[229585]: xmm_input Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: 4095 Dec 2 04:35:04 localhost nova_compute[229585]: on Dec 2 04:35:04 localhost nova_compute[229585]: off Dec 2 04:35:04 localhost nova_compute[229585]: off Dec 2 04:35:04 localhost nova_compute[229585]: Linux KVM Hv Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: tdx Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.564 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
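The dump logged above is libvirt's domain-capabilities document, which nova retrieves in `_get_domain_capabilities` and writes out at DEBUG level. As a minimal sketch of how such a document can be inspected, the snippet below parses a hypothetical, heavily trimmed `domainCapabilities` fragment (the real document comes from `virsh domcapabilities` or the libvirt API) with Python's stdlib to recover the advertised hyperv enlightenments:

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed domainCapabilities fragment for illustration only;
# the real document logged by nova_compute is far larger.
DOMCAPS = """
<domainCapabilities>
  <features>
    <hyperv supported='yes'>
      <enum name='features'>
        <value>relaxed</value>
        <value>vapic</value>
        <value>spinlocks</value>
      </enum>
    </hyperv>
  </features>
</domainCapabilities>
"""

def hyperv_features(xml_text: str) -> list[str]:
    """Return the hyperv enlightenments a domainCapabilities doc advertises."""
    root = ET.fromstring(xml_text)
    path = "./features/hyperv/enum[@name='features']/value"
    return [v.text for v in root.findall(path)]

print(hyperv_features(DOMCAPS))  # ['relaxed', 'vapic', 'spinlocks']
```

The `hyperv_features` helper name and the trimmed XML are illustrative assumptions; only the element layout (`features/hyperv/enum/value`) follows libvirt's documented format.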
Dec 2 04:35:04 localhost nova_compute[229585]: [libvirt domain capabilities XML for machine_type=q35; element markup was lost in log extraction. Recoverable content, in order:]
Dec 2 04:35:04 localhost nova_compute[229585]:   emulator: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-q35-rhel9.8.0; arch: x86_64
Dec 2 04:35:04 localhost nova_compute[229585]:   os firmware: efi; loader values: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd, /usr/share/edk2/ovmf/OVMF_CODE.fd, /usr/share/edk2/ovmf/OVMF.amdsev.fd, /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: yes, no
Dec 2 04:35:04 localhost nova_compute[229585]:   cpu modes: on/off toggles for two modes; host-model: EPYC-Rome, vendor AMD
Dec 2 04:35:04 localhost nova_compute[229585]:   custom CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2
Dec 2 04:35:04 localhost podman[229806]: 2025-12-02 09:35:04.670638898 +0000 UTC m=+0.150857862 container cleanup 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:35:04 localhost nova_compute[229585]:   custom CPU models (cont.): Haswell
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-IBRS Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-noTSX Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-noTSX-IBRS Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost systemd[1]: libpod-conmon-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e.scope: Deactivated successfully. 
Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-v3 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Haswell-v4 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 
04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-noTSX Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v3 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 
localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v4 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v5 Dec 
2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v6 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 
04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Icelake-Server-v7 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: IvyBridge Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: 
IvyBridge-IBRS Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: IvyBridge-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: IvyBridge-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: KnightsMill Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: KnightsMill-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Nehalem Dec 2 04:35:04 localhost nova_compute[229585]: Nehalem-IBRS Dec 2 04:35:04 localhost nova_compute[229585]: Nehalem-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Nehalem-v2 Dec 2 04:35:04 localhost nova_compute[229585]: 
Opteron_G1 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G1-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G2 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G2-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G3 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G3-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G4 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G4-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G5 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Opteron_G5-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Penryn Dec 2 04:35:04 localhost nova_compute[229585]: Penryn-v1 Dec 2 04:35:04 localhost nova_compute[229585]: SandyBridge Dec 2 04:35:04 localhost nova_compute[229585]: SandyBridge-IBRS Dec 2 04:35:04 localhost nova_compute[229585]: SandyBridge-v1 Dec 2 04:35:04 localhost nova_compute[229585]: SandyBridge-v2 Dec 2 04:35:04 localhost nova_compute[229585]: SapphireRapids Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: SapphireRapids-v1 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: SapphireRapids-v2 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: SapphireRapids-v3 Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost 
nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 2 04:35:04 localhost nova_compute[229585]: Dec 
Dec 2 04:35:04 localhost nova_compute[229585]: [libvirt domain capabilities XML dump; the XML markup was lost in this capture, leaving only element values interleaved with repeated syslog prefixes. Recoverable values follow.]
Dec 2 04:35:04 localhost nova_compute[229585]: [CPU models: SierraForest, SierraForest-v1; Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1 through -v4; Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1 through -v5; Snowridge, Snowridge-v1 through -v4; Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2; athlon, athlon-v1; core2duo, core2duo-v1; coreduo, coreduo-v1; kvm32, kvm32-v1; kvm64, kvm64-v1; n270, n270-v1; pentium, pentium-v1; pentium2, pentium2-v1; pentium3, pentium3-v1; phenom, phenom-v1; qemu32, qemu32-v1; qemu64, qemu64-v1]
Dec 2 04:35:04 localhost nova_compute[229585]: [device and feature values, apparently from the devices/features sections of the capabilities document: memory backing source types file, anonymous, memfd; disk device types disk, cdrom, floppy, lun; disk buses fdc, scsi, virtio, usb, sata; disk models virtio, virtio-transitional, virtio-non-transitional; graphics types vnc, egl-headless, dbus; hostdev mode subsystem with startupPolicy default, mandatory, requisite, optional and subsystem types usb, pci, scsi; hostdev models virtio, virtio-transitional, virtio-non-transitional; rng backend models random, egd, builtin; filesystem driver types path, handle, virtiofs; TPM models tpm-tis, tpm-crb with backend models emulator, external and backend version 2.0; redirdev bus usb; channel types pty, unix; crypto backend models qemu, builtin; interface backends default, passt; panic models isa, hyperv; console/serial types null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus; Hyper-V enlightenments relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; additional literal values 4095, on, off, off, Linux KVM Hv; launch security type tdx]
Dec 2 04:35:04 localhost nova_compute[229585]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.617 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.618 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.618 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot
/usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.618 229589 INFO nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Secure Boot support detected
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.622 229589 INFO nova.virt.libvirt.driver [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.623 229589 INFO nova.virt.libvirt.driver [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.634 229589 DEBUG nova.virt.libvirt.driver [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.660 229589 INFO nova.virt.node [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Determined node identity 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from /var/lib/nova/compute_id
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.673 229589 DEBUG nova.compute.manager [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Verified node 9ec09c1a-d246-41d7-94f4-b482f646a9f1 matches my host np0005541914.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.711 229589 INFO nova.compute.manager [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.787 229589 DEBUG oslo_concurrency.lockutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.787 229589 DEBUG oslo_concurrency.lockutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.788 229589 DEBUG oslo_concurrency.lockutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.788 229589 DEBUG nova.compute.resource_tracker [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 2 04:35:04 localhost nova_compute[229585]: 2025-12-02 09:35:04.789 229589 DEBUG oslo_concurrency.processutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 04:35:05 localhost nova_compute[229585]: 2025-12-02 09:35:05.227 229589 DEBUG oslo_concurrency.processutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 04:35:05 localhost systemd[1]: var-lib-containers-storage-overlay-ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6-merged.mount: Deactivated successfully.
Dec 2 04:35:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e-userdata-shm.mount: Deactivated successfully.
Dec 2 04:35:05 localhost nova_compute[229585]: 2025-12-02 09:35:05.433 229589 WARNING nova.virt.libvirt.driver [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 2 04:35:05 localhost nova_compute[229585]: 2025-12-02 09:35:05.435 229589 DEBUG nova.compute.resource_tracker [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=13602MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 2 04:35:05 localhost nova_compute[229585]: 2025-12-02 09:35:05.435 229589 DEBUG oslo_concurrency.lockutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 04:35:05 localhost nova_compute[229585]: 2025-12-02 09:35:05.436 229589 DEBUG oslo_concurrency.lockutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 04:35:05 localhost nova_compute[229585]: 2025-12-02 09:35:05.768 229589 DEBUG nova.compute.resource_tracker [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 2 04:35:05 localhost nova_compute[229585]: 2025-12-02 09:35:05.768 229589 DEBUG nova.compute.resource_tracker [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 2 04:35:05 localhost nova_compute[229585]: 2025-12-02 09:35:05.827 229589 DEBUG nova.scheduler.client.report [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 2 04:35:05 localhost nova_compute[229585]: 2025-12-02 09:35:05.883 229589 DEBUG nova.scheduler.client.report [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 2 04:35:05 localhost nova_compute[229585]: 2025-12-02 09:35:05.884 229589 DEBUG nova.compute.provider_tree [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 2 04:35:05 localhost nova_compute[229585]: 2025-12-02 09:35:05.900 229589 DEBUG nova.scheduler.client.report [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 2 04:35:05 localhost nova_compute[229585]: 2025-12-02 09:35:05.928 229589 DEBUG nova.scheduler.client.report [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_RESCUE_BFV,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_F16C,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_LAN9118,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_MMX,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 2 04:35:05 localhost nova_compute[229585]: 2025-12-02 09:35:05.946 229589 DEBUG oslo_concurrency.processutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 04:35:06 localhost systemd[1]: session-53.scope: Deactivated successfully.
Dec 2 04:35:06 localhost systemd[1]: session-53.scope: Consumed 2min 15.864s CPU time.
Dec 2 04:35:06 localhost systemd-logind[760]: Session 53 logged out. Waiting for processes to exit.
Dec 2 04:35:06 localhost systemd-logind[760]: Removed session 53.
Dec 2 04:35:06 localhost nova_compute[229585]: 2025-12-02 09:35:06.423 229589 DEBUG oslo_concurrency.processutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 04:35:06 localhost nova_compute[229585]: 2025-12-02 09:35:06.429 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 2 04:35:06 localhost nova_compute[229585]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 2 04:35:06 localhost nova_compute[229585]: 2025-12-02 09:35:06.429 229589 INFO nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] kernel doesn't support AMD SEV
Dec 2 04:35:06 localhost nova_compute[229585]: 2025-12-02 09:35:06.431 229589 DEBUG nova.compute.provider_tree [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 2 04:35:06 localhost nova_compute[229585]: 2025-12-02 09:35:06.431 229589 DEBUG nova.virt.libvirt.driver [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 2 04:35:06 localhost nova_compute[229585]: 2025-12-02 09:35:06.459 229589 DEBUG nova.scheduler.client.report [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total':
8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:35:06 localhost nova_compute[229585]: 2025-12-02 09:35:06.577 229589 DEBUG nova.compute.provider_tree [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Updating resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 generation from 2 to 3 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Dec 2 04:35:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14216 DF PROTO=TCP SPT=33630 DPT=9882 SEQ=58620176 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547B6A30000000001030307) Dec 2 04:35:06 localhost nova_compute[229585]: 2025-12-02 09:35:06.826 229589 DEBUG nova.compute.resource_tracker [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:35:06 localhost nova_compute[229585]: 2025-12-02 09:35:06.827 229589 DEBUG oslo_concurrency.lockutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:35:06 localhost nova_compute[229585]: 2025-12-02 09:35:06.827 229589 DEBUG nova.service [None 
req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Dec 2 04:35:06 localhost nova_compute[229585]: 2025-12-02 09:35:06.924 229589 DEBUG nova.service [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Dec 2 04:35:06 localhost nova_compute[229585]: 2025-12-02 09:35:06.925 229589 DEBUG nova.servicegroup.drivers.db [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] DB_Driver: join new ServiceGroup member np0005541914.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Dec 2 04:35:08 localhost sshd[229893]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:35:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46586 DF PROTO=TCP SPT=48348 DPT=9100 SEQ=2276887532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547C5220000000001030307) Dec 2 04:35:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:35:11 localhost systemd[1]: tmp-crun.cZ3qbc.mount: Deactivated successfully. 
Dec 2 04:35:11 localhost podman[229895]: 2025-12-02 09:35:11.100990944 +0000 UTC m=+0.099023673 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:35:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. 
Dec 2 04:35:11 localhost podman[229895]: 2025-12-02 09:35:11.210257412 +0000 UTC m=+0.208290151 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Dec 2 04:35:11 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:35:11 localhost podman[229920]: 2025-12-02 09:35:11.281512779 +0000 UTC m=+0.066926591 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 04:35:11 localhost podman[229920]: 2025-12-02 09:35:11.290931479 +0000 UTC 
m=+0.076345271 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Dec 2 04:35:11 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:35:11 localhost sshd[229939]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:35:11 localhost sshd[229941]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:35:12 localhost systemd-logind[760]: New session 55 of user zuul. Dec 2 04:35:12 localhost systemd[1]: Started Session 55 of User zuul. Dec 2 04:35:12 localhost python3.9[230052]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:35:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47414 DF PROTO=TCP SPT=53308 DPT=9105 SEQ=4237463935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547CFA20000000001030307) Dec 2 04:35:14 localhost python3.9[230166]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:35:14 localhost systemd[1]: Reloading. Dec 2 04:35:14 localhost systemd-sysv-generator[230192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:35:14 localhost systemd-rc-local-generator[230189]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:15 localhost python3.9[230310]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:35:15 localhost network[230327]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:35:15 localhost network[230328]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:35:15 localhost network[230329]: It is advised to switch to 'NetworkManager' instead for network management. 
Dec 2 04:35:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49660 DF PROTO=TCP SPT=55914 DPT=9102 SEQ=4108157995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547DAEE0000000001030307) Dec 2 04:35:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49662 DF PROTO=TCP SPT=55914 DPT=9102 SEQ=4108157995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547E6E20000000001030307) Dec 2 04:35:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:35:20 localhost sshd[230420]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:35:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18167 DF PROTO=TCP SPT=46438 DPT=9101 SEQ=2661148423 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547F1220000000001030307) Dec 2 04:35:24 localhost python3.9[230652]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:35:26 localhost python3.9[230763]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Dec 2 04:35:26 localhost systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 76.3 (254 of 333 items), suggesting rotation. Dec 2 04:35:26 localhost systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 04:35:26 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:35:26 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:35:26 localhost python3.9[230874]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4590 DF PROTO=TCP SPT=42676 DPT=9101 SEQ=2818963414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54806620000000001030307) Dec 2 04:35:27 localhost python3.9[230984]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:35:28 localhost 
python3.9[231094]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 2 04:35:30 localhost python3.9[231204]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:35:30 localhost systemd[1]: Reloading. Dec 2 04:35:30 localhost systemd-sysv-generator[231233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:35:30 localhost systemd-rc-local-generator[231228]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:31 localhost python3.9[231350]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:35:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49664 DF PROTO=TCP SPT=55914 DPT=9102 SEQ=4108157995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54817220000000001030307) Dec 2 04:35:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:35:33 localhost systemd[1]: tmp-crun.IvMKa8.mount: Deactivated successfully. 
Dec 2 04:35:33 localhost podman[231462]: 2025-12-02 09:35:33.091043809 +0000 UTC m=+0.108007088 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:35:33 localhost podman[231462]: 2025-12-02 09:35:33.100411948 +0000 UTC m=+0.117375107 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:35:33 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:35:33 localhost python3.9[231461]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:35:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6224 DF PROTO=TCP SPT=54898 DPT=9882 SEQ=2354412042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5481FC00000000001030307)
Dec 2 04:35:34 localhost python3.9[231588]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:35:34 localhost python3.9[231698]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:35:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6225 DF PROTO=TCP SPT=54898 DPT=9882 SEQ=2354412042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54823E30000000001030307)
Dec 2 04:35:35 localhost python3.9[231784]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668134.2179036-362-20581371179412/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=6bcdd3baf62a4327544f9fc7c77a2d84b60d8110 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:35:36 localhost python3.9[231894]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec 2 04:35:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6226 DF PROTO=TCP SPT=54898 DPT=9882 SEQ=2354412042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5482BE20000000001030307)
Dec 2 04:35:37 localhost python3.9[232004]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 2 04:35:37 localhost sshd[232116]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:35:37 localhost python3.9[232115]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 2 04:35:38 localhost python3.9[232233]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541914.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 2 04:35:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22111 DF PROTO=TCP SPT=47092 DPT=9100 SEQ=51859181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54839220000000001030307)
Dec 2 04:35:40 localhost python3.9[232349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:35:41 localhost python3.9[232435]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764668140.0778627-566-268474356238982/.source.conf _original_basename=ceilometer.conf follow=False checksum=9b40aa523dc31738ea523cc852832670ccea382a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:35:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:35:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:35:42 localhost podman[232527]: 2025-12-02 09:35:42.097298009 +0000 UTC m=+0.092485365 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Dec 2 04:35:42 localhost podman[232527]: 2025-12-02 09:35:42.106864414 +0000 UTC m=+0.102051770 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 2 04:35:42 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:35:42 localhost systemd[1]: tmp-crun.uqpZO1.mount: Deactivated successfully.
Dec 2 04:35:42 localhost podman[232530]: 2025-12-02 09:35:42.205598386 +0000 UTC m=+0.198332884 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, container_name=ovn_controller)
Dec 2 04:35:42 localhost python3.9[232557]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:35:42 localhost podman[232530]: 2025-12-02 09:35:42.312285641 +0000 UTC m=+0.305020099 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 2 04:35:42 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:35:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46587 DF PROTO=TCP SPT=48348 DPT=9100 SEQ=2276887532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54843230000000001030307)
Dec 2 04:35:42 localhost python3.9[232671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764668141.7454681-566-90961809143074/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:35:43 localhost python3.9[232779]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:35:43 localhost python3.9[232865]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764668142.9077556-566-148277575007804/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:35:44 localhost python3.9[232973]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:35:45 localhost python3.9[233081]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:35:45 localhost python3.9[233189]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:35:45 localhost nova_compute[229585]: 2025-12-02 09:35:45.926 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:35:45 localhost nova_compute[229585]: 2025-12-02 09:35:45.958 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:35:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29516 DF PROTO=TCP SPT=38034 DPT=9102 SEQ=3925383142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548501E0000000001030307)
Dec 2 04:35:46 localhost python3.9[233275]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668145.4036455-743-176029571531069/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:35:46 localhost python3.9[233383]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:35:47 localhost python3.9[233438]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:35:47 localhost python3.9[233546]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:35:48 localhost python3.9[233632]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668147.4501677-743-101728219918554/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=d15068604cf730dd6e7b88a19d62f57d3a39f94f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:35:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6228 DF PROTO=TCP SPT=54898 DPT=9882 SEQ=2354412042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5485B220000000001030307)
Dec 2 04:35:48 localhost python3.9[233740]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:35:49 localhost python3.9[233826]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668148.5240872-743-103922710603031/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:35:49 localhost python3.9[233934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:35:50 localhost python3.9[234020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668149.5444264-743-23224108120926/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:35:50 localhost python3.9[234128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:35:51 localhost python3.9[234214]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668150.5383701-743-216745527289083/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=7e5ab36b7368c1d4a00810e02af11a7f7d7c84e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:35:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4592 DF PROTO=TCP SPT=42676 DPT=9101 SEQ=2818963414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54867220000000001030307)
Dec 2 04:35:52 localhost python3.9[234322]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:35:52 localhost python3.9[234408]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668151.5860927-743-44243444731744/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:35:53 localhost python3.9[234516]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:35:53 localhost python3.9[234602]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668152.7256374-743-178796731201777/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=0e4ea521b0035bea70b7a804346a5c89364dcbc3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:35:54 localhost python3.9[234710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:35:55 localhost python3.9[234796]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668153.8120742-743-51626548515279/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=b056dcaaba7624b93826bb95ee9e82f81bde6c72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:35:56 localhost python3.9[234904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:35:56 localhost python3.9[234990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668155.6411343-743-172783995893710/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=885ccc6f5edd8803cb385bdda5648d0b3017b4e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:35:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58328 DF PROTO=TCP SPT=55876 DPT=9101 SEQ=2613760775 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5487BA20000000001030307)
Dec 2 04:35:57 localhost python3.9[235098]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:35:58 localhost python3.9[235184]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668156.6742268-743-151556739770795/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:35:59 localhost python3.9[235294]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:35:59 localhost python3.9[235404]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:35:59 localhost systemd[1]: Reloading.
Dec 2 04:36:00 localhost systemd-rc-local-generator[235428]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:36:00 localhost systemd-sysv-generator[235432]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:36:00 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:36:00 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:36:00 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:36:00 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:36:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:36:00 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:36:00 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:36:00 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:36:00 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:36:00 localhost systemd[1]: Listening on Podman API Socket.
Dec 2 04:36:01 localhost python3.9[235554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:36:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29520 DF PROTO=TCP SPT=38034 DPT=9102 SEQ=3925383142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5488D220000000001030307)
Dec 2 04:36:01 localhost python3.9[235642]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668160.6335156-1259-98420025090222/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:36:02 localhost python3.9[235697]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:36:02 localhost python3.9[235785]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668160.6335156-1259-98420025090222/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:36:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:36:03.144 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:36:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:36:03.146 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:36:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:36:03.147 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:36:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 04:36:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50877 DF PROTO=TCP SPT=53216 DPT=9882 SEQ=87131067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54894F10000000001030307)
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.644 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.644 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.645 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.645 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.659 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.659 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.660 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.660 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.660 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.660 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.661 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.661 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.661 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:36:03 localhost podman[235896]: 2025-12-02 09:36:03.666672856 +0000 UTC m=+0.103753550 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.678 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.678 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.679 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.679 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node:
np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:36:03 localhost nova_compute[229585]: 2025-12-02 09:36:03.679 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:36:03 localhost podman[235896]: 2025-12-02 09:36:03.682779404 +0000 UTC m=+0.119860078 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, 
org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 04:36:03 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:36:03 localhost python3.9[235895]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False Dec 2 04:36:04 localhost nova_compute[229585]: 2025-12-02 09:36:04.139 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:36:04 localhost nova_compute[229585]: 2025-12-02 09:36:04.364 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:36:04 localhost nova_compute[229585]: 2025-12-02 09:36:04.367 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=13598MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:36:04 localhost nova_compute[229585]: 2025-12-02 09:36:04.367 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:36:04 localhost nova_compute[229585]: 2025-12-02 09:36:04.367 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:36:04 localhost nova_compute[229585]: 2025-12-02 09:36:04.425 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:36:04 localhost nova_compute[229585]: 2025-12-02 09:36:04.426 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:36:04 localhost nova_compute[229585]: 2025-12-02 09:36:04.447 229589 DEBUG 
oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:36:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50878 DF PROTO=TCP SPT=53216 DPT=9882 SEQ=87131067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54898E20000000001030307) Dec 2 04:36:04 localhost python3.9[236046]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:36:04 localhost nova_compute[229585]: 2025-12-02 09:36:04.902 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:36:04 localhost nova_compute[229585]: 2025-12-02 09:36:04.909 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:36:04 localhost nova_compute[229585]: 2025-12-02 09:36:04.927 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 
'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:36:04 localhost nova_compute[229585]: 2025-12-02 09:36:04.929 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:36:04 localhost nova_compute[229585]: 2025-12-02 09:36:04.930 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:36:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50879 DF PROTO=TCP SPT=53216 DPT=9882 SEQ=87131067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548A0E20000000001030307) Dec 2 04:36:07 localhost python3[236177]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:36:07 localhost python3[236177]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "343ba269c9fe0a56d7572c8ca328dbce002017c4dd4986f43667971dd03085c2",#012 "Digest": "sha256:667029e1ec7e63fffa1a096f432f6160b441ba36df1bddc9066cbd1129b82009",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"#012 ],#012 "RepoDigests": [#012 
"quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:667029e1ec7e63fffa1a096f432f6160b441ba36df1bddc9066cbd1129b82009"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:21:53.58682213Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 505175293,#012 "VirtualSize": 505175293,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 
"sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:a47016624274f5ebad76019f5a2e465c1737f96caa539b36f90ab8e33592f415",#012 "sha256:38a03f5e96658211fb28e2f87c11ffad531281d1797368f48e6cd4af7ac97c0e"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 
"empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 Dec 2 04:36:07 localhost podman[236228]: 2025-12-02 09:36:07.507566399 +0000 UTC m=+0.148951819 container remove 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, 
name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:36:07 localhost python3[236177]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ceilometer_agent_compute Dec 2 04:36:07 localhost podman[236242]: Dec 2 04:36:07 localhost podman[236242]: 2025-12-02 09:36:07.623632979 +0000 UTC m=+0.093866555 container create a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:36:07 localhost podman[236242]: 2025-12-02 09:36:07.579442361 +0000 UTC m=+0.049702528 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Dec 2 04:36:07 localhost python3[236177]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified kolla_start Dec 2 04:36:08 localhost python3.9[236390]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:36:09 localhost python3.9[236502]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:36:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 
MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24097 DF PROTO=TCP SPT=35220 DPT=9100 SEQ=2092379517 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548AF230000000001030307) Dec 2 04:36:10 localhost python3.9[236611]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668169.7661154-1450-193599814280817/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:36:11 localhost python3.9[236666]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:36:11 localhost systemd[1]: Reloading. Dec 2 04:36:11 localhost systemd-rc-local-generator[236694]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:36:11 localhost systemd-sysv-generator[236697]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost python3.9[236757]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:36:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:36:12 localhost systemd[1]: Reloading. Dec 2 04:36:12 localhost systemd-sysv-generator[236803]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:36:12 localhost systemd-rc-local-generator[236797]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:36:12 localhost podman[236759]: 2025-12-02 09:36:12.274515889 +0000 UTC m=+0.111634465 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 2 04:36:12 localhost podman[236759]: 2025-12-02 09:36:12.286049845 +0000 UTC m=+0.123168431 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:36:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:36:12 localhost systemd[1]: Starting ceilometer_agent_compute container... Dec 2 04:36:12 localhost systemd[1]: Started libcrun container. 
Dec 2 04:36:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4498335cbf3e241b11d64a5f10bf301f1a8b589a19155db4d4e0636308a7a555/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Dec 2 04:36:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4498335cbf3e241b11d64a5f10bf301f1a8b589a19155db4d4e0636308a7a555/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Dec 2 04:36:12 localhost podman[236816]: 2025-12-02 09:36:12.693674485 +0000 UTC m=+0.148066311 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller) Dec 2 04:36:12 
localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:36:12 localhost podman[236817]: 2025-12-02 09:36:12.715264593 +0000 UTC m=+0.165745808 container init a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 04:36:12 localhost 
ceilometer_agent_compute[236841]: + sudo -E kolla_set_configs Dec 2 04:36:12 localhost podman[236816]: 2025-12-02 09:36:12.741923657 +0000 UTC m=+0.196315473 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 04:36:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. 
Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: sudo: unable to send audit message: Operation not permitted Dec 2 04:36:12 localhost podman[236817]: ceilometer_agent_compute Dec 2 04:36:12 localhost podman[236817]: 2025-12-02 09:36:12.756191799 +0000 UTC m=+0.206673054 container start a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, 
org.label-schema.build-date=20251125) Dec 2 04:36:12 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 04:36:12 localhost systemd[1]: Started ceilometer_agent_compute container. Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: INFO:__main__:Validating config file Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: INFO:__main__:Copying service configuration files Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to 
/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: INFO:__main__:Writing out command to execute Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: ++ cat /run_command Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: + ARGS= Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: + sudo kolla_copy_cacerts Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: sudo: unable to send audit message: Operation not permitted Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: + [[ ! -n '' ]] Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: + . kolla_extend_start Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\''' Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: + umask 0022 Dec 2 04:36:12 localhost ceilometer_agent_compute[236841]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout Dec 2 04:36:12 localhost podman[236864]: 2025-12-02 09:36:12.857403279 +0000 UTC m=+0.099718285 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS 
Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 2 04:36:12 localhost podman[236864]: 2025-12-02 09:36:12.887065957 +0000 UTC m=+0.129380933 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes 
Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 04:36:12 localhost podman[236864]: unhealthy Dec 2 04:36:12 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:36:12 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'. 
Dec 2 04:36:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9455 DF PROTO=TCP SPT=41292 DPT=9105 SEQ=4090967844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548BA220000000001030307) Dec 2 04:36:13 localhost systemd[1]: tmp-crun.8jVGNC.mount: Deactivated successfully. Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.564 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.565 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.565 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.565 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.566 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.566 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Dec 2 04:36:13 localhost 
ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.566 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.566 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.566 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 
'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG 
cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG 
cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.581 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.581 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.581 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.581 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.581 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.596 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.597 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.598 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 2 04:36:13 localhost python3.9[236997]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 04:36:13 localhost systemd[1]: Stopping ceilometer_agent_compute container...
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.674 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.739 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.739 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.739 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.739 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-]
max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 
DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost 
ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG 
cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 
04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG 
cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 
09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 
04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 
localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] 
publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost 
ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost 
ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] 
service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 
Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG 
cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost 
ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost 
ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost 
ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 
12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG 
cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG 
cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.758 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.758 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.758 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.758 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.758 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.758 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.758 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.762 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.770 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.773 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.773 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost 
ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.774 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.774 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.774 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.774 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.774 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.774 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.774 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.775 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.775 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.775 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.775 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.775 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.775 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.775 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.776 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources 
found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.776 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.776 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.776 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.776 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.776 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.776 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.777 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.777 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.845 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.846 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308 Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.846 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12] Dec 2 04:36:13 localhost ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.851 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320 Dec 2 04:36:13 localhost journal[228953]: End of file while reading data: Input/output error Dec 2 04:36:13 localhost journal[228953]: End of file while reading data: Input/output error Dec 2 04:36:14 localhost systemd[1]: libpod-a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.scope: Deactivated successfully. Dec 2 04:36:14 localhost systemd[1]: libpod-a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.scope: Consumed 1.225s CPU time. 
Dec 2 04:36:14 localhost podman[237004]: 2025-12-02 09:36:14.029083875 +0000 UTC m=+0.355780987 container died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:36:14 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.timer: Deactivated successfully. 
Dec 2 04:36:14 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:36:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b-userdata-shm.mount: Deactivated successfully. Dec 2 04:36:14 localhost podman[237004]: 2025-12-02 09:36:14.140901684 +0000 UTC m=+0.467598706 container cleanup a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:36:14 localhost podman[237004]: ceilometer_agent_compute Dec 2 04:36:14 localhost podman[237035]: 2025-12-02 09:36:14.236082958 +0000 UTC m=+0.056785198 container cleanup a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Dec 2 04:36:14 localhost podman[237035]: ceilometer_agent_compute Dec 2 04:36:14 localhost systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully. Dec 2 04:36:14 localhost systemd[1]: Stopped ceilometer_agent_compute container. Dec 2 04:36:14 localhost systemd[1]: Starting ceilometer_agent_compute container... Dec 2 04:36:14 localhost systemd[1]: Started libcrun container. Dec 2 04:36:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4498335cbf3e241b11d64a5f10bf301f1a8b589a19155db4d4e0636308a7a555/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Dec 2 04:36:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4498335cbf3e241b11d64a5f10bf301f1a8b589a19155db4d4e0636308a7a555/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Dec 2 04:36:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. 
Dec 2 04:36:14 localhost podman[237046]: 2025-12-02 09:36:14.376842762 +0000 UTC m=+0.109660893 container init a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true) Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: + sudo -E kolla_set_configs Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: sudo: unable to send audit 
message: Operation not permitted Dec 2 04:36:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:36:14 localhost podman[237046]: 2025-12-02 09:36:14.419415229 +0000 UTC m=+0.152233270 container start a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator 
team) Dec 2 04:36:14 localhost podman[237046]: ceilometer_agent_compute Dec 2 04:36:14 localhost systemd[1]: Started ceilometer_agent_compute container. Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Validating config file Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Copying service configuration files Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Deleting 
/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: INFO:__main__:Writing out command to execute Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: ++ cat /run_command Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: + ARGS= Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: + sudo kolla_copy_cacerts Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: sudo: unable to send audit message: Operation not permitted Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: + [[ ! -n '' ]] Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: + . 
kolla_extend_start Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\''' Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: + umask 0022 Dec 2 04:36:14 localhost ceilometer_agent_compute[237061]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout Dec 2 04:36:14 localhost podman[237070]: 2025-12-02 09:36:14.480671804 +0000 UTC m=+0.055685414 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 04:36:14 localhost podman[237070]: 2025-12-02 09:36:14.513805649 +0000 UTC m=+0.088819179 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 04:36:14 localhost podman[237070]: unhealthy Dec 2 04:36:14 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:36:14 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'. Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.200 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Dec 
2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 
'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 
localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost 
ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost 
ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] 
tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] 
compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] 
monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] 
monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 
09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG 
cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] 
vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] 
service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost 
ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 
localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.232 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']]. Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.233 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d]. 
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.235 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']]. Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.253 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.397 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.397 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.397 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.397 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.397 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.397 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.398 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.398 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.398 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.398 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.398 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.398 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.398 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.399 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 
'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.399 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.399 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.399 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.399 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.399 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.399 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.401 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.401 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.401 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.401 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.401 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.401 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.401 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] 
rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 
09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG 
cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.423 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.423 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.423 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.426 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.431 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:15 localhost ceilometer_agent_compute[237061]: 
2025-12-02 09:36:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:36:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47419 DF PROTO=TCP SPT=53308 DPT=9105 SEQ=4237463935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548C5220000000001030307) Dec 2 04:36:16 localhost python3.9[237205]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:36:16 localhost python3.9[237293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668175.5734465-1547-62652876652314/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 04:36:17 localhost python3.9[237403]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False Dec 2 04:36:18 localhost python3.9[237513]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:36:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50881 DF PROTO=TCP SPT=53216 DPT=9882 SEQ=87131067 ACK=0 WINDOW=32640 RES=0x00 
SYN URGP=0 OPT (020405500402080AD548D1220000000001030307) Dec 2 04:36:19 localhost python3[237623]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:36:19 localhost podman[237660]: Dec 2 04:36:19 localhost podman[237660]: 2025-12-02 09:36:19.395353461 +0000 UTC m=+0.088727665 container create 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors ) Dec 2 04:36:19 localhost podman[237660]: 2025-12-02 09:36:19.352124194 +0000 UTC m=+0.045498468 image pull 
quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Dec 2 04:36:19 localhost python3[237623]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c --web.disable-exporter-metrics --collector.systemd 
--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl Dec 2 04:36:20 localhost python3.9[237808]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:36:21 localhost python3.9[237928]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:36:22 localhost python3.9[238097]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668181.7171264-1705-229647947346802/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:36:22 localhost python3.9[238152]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:36:22 localhost systemd[1]: Reloading. Dec 2 04:36:23 localhost systemd-rc-local-generator[238178]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 04:36:23 localhost systemd-sysv-generator[238182]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47557 DF PROTO=TCP SPT=59252 DPT=9101 SEQ=335826367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548E1220000000001030307) Dec 2 04:36:23 localhost python3.9[238260]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:36:23 localhost systemd[1]: Reloading. Dec 2 04:36:24 localhost systemd-rc-local-generator[238286]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:36:24 localhost systemd-sysv-generator[238290]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:36:24 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:24 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:24 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:24 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:36:24 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:24 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:24 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:24 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:24 localhost systemd[1]: Starting node_exporter container... Dec 2 04:36:24 localhost systemd[1]: Started libcrun container. Dec 2 04:36:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. 
Dec 2 04:36:24 localhost podman[238300]: 2025-12-02 09:36:24.375480755 +0000 UTC m=+0.140365943 container init 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.396Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)" Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.396Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)" Dec 2 04:36:24 localhost node_exporter[238314]: 
ts=2025-12-02T09:36:24.396Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required." Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.397Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.397Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.397Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.397Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice) Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:110 level=info msg="Enabled collectors" Dec 2 04:36:24 localhost 
node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=arp Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=bcache Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=bonding Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=btrfs Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=conntrack Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=cpu Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=cpufreq Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=diskstats Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=edac Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=fibrechannel Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=filefd Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=filesystem Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=infiniband Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=ipvs Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=loadavg Dec 2 04:36:24 localhost 
node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=mdadm Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=meminfo Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=netclass Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=netdev Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=netstat Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=nfs Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=nfsd Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=nvme Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=schedstat Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=sockstat Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=softnet Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=systemd Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=tapestats Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=udp_queues Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=vmstat Dec 2 04:36:24 localhost 
node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=xfs Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=zfs Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.399Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100 Dec 2 04:36:24 localhost node_exporter[238314]: ts=2025-12-02T09:36:24.399Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100 Dec 2 04:36:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:36:24 localhost podman[238300]: 2025-12-02 09:36:24.420417495 +0000 UTC m=+0.185302643 container start 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:36:24 localhost podman[238300]: node_exporter Dec 2 04:36:24 localhost systemd[1]: Started node_exporter container. Dec 2 04:36:24 localhost podman[238323]: 2025-12-02 09:36:24.520694437 +0000 UTC m=+0.094705171 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:36:24 localhost podman[238323]: 2025-12-02 09:36:24.554766321 +0000 UTC m=+0.128777075 container exec_died 
3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:36:24 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 04:36:25 localhost python3.9[238453]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:36:25 localhost systemd[1]: Stopping node_exporter container... Dec 2 04:36:25 localhost systemd[1]: tmp-crun.5lhpMO.mount: Deactivated successfully. 
Dec 2 04:36:25 localhost systemd[1]: libpod-3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.scope: Deactivated successfully. Dec 2 04:36:25 localhost podman[238457]: 2025-12-02 09:36:25.340748304 +0000 UTC m=+0.075379993 container died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:36:25 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.timer: Deactivated successfully. Dec 2 04:36:25 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. 
Dec 2 04:36:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6-userdata-shm.mount: Deactivated successfully. Dec 2 04:36:25 localhost podman[238457]: 2025-12-02 09:36:25.393168205 +0000 UTC m=+0.127799864 container cleanup 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:36:25 localhost podman[238457]: node_exporter Dec 2 04:36:25 localhost systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Dec 2 04:36:25 localhost podman[238484]: 2025-12-02 09:36:25.476318267 +0000 UTC m=+0.054180896 container cleanup 
3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:36:25 localhost podman[238484]: node_exporter Dec 2 04:36:25 localhost systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'. Dec 2 04:36:25 localhost systemd[1]: Stopped node_exporter container. Dec 2 04:36:25 localhost systemd[1]: Starting node_exporter container... Dec 2 04:36:25 localhost systemd[1]: Started libcrun container. Dec 2 04:36:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. 
Dec 2 04:36:25 localhost podman[238496]: 2025-12-02 09:36:25.636150111 +0000 UTC m=+0.129840927 container init 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.646Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)" Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.646Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)" Dec 2 04:36:25 localhost node_exporter[238511]: 
ts=2025-12-02T09:36:25.646Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required." Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.646Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.647Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.647Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.647Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice) Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.647Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.647Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:110 level=info msg="Enabled collectors" Dec 2 04:36:25 localhost 
node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=arp Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=bcache Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=bonding Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=btrfs Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=conntrack Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=cpu Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=cpufreq Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=diskstats Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=edac Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=fibrechannel Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=filefd Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=filesystem Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=infiniband Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=ipvs Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=loadavg Dec 2 04:36:25 localhost 
node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=mdadm Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=meminfo Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=netclass Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=netdev Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=netstat Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=nfs Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=nfsd Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=nvme Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=schedstat Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=sockstat Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=softnet Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.650Z caller=node_exporter.go:117 level=info collector=systemd Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.650Z caller=node_exporter.go:117 level=info collector=tapestats Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.650Z caller=node_exporter.go:117 level=info collector=udp_queues Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.650Z caller=node_exporter.go:117 level=info collector=vmstat Dec 2 04:36:25 localhost 
node_exporter[238511]: ts=2025-12-02T09:36:25.650Z caller=node_exporter.go:117 level=info collector=xfs Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.650Z caller=node_exporter.go:117 level=info collector=zfs Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.651Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100 Dec 2 04:36:25 localhost node_exporter[238511]: ts=2025-12-02T09:36:25.651Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100 Dec 2 04:36:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:36:25 localhost podman[238496]: 2025-12-02 09:36:25.671630049 +0000 UTC m=+0.165320835 container start 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:36:25 localhost podman[238496]: node_exporter Dec 2 04:36:25 localhost systemd[1]: Started node_exporter container. Dec 2 04:36:25 localhost podman[238521]: 2025-12-02 09:36:25.734548205 +0000 UTC m=+0.061514204 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:36:25 localhost podman[238521]: 2025-12-02 09:36:25.768369951 +0000 UTC m=+0.095335980 
container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:36:25 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:36:26 localhost python3.9[238652]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:36:26 localhost python3.9[238740]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668185.9031549-1802-181690434975877/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 04:36:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47558 DF PROTO=TCP SPT=59252 DPT=9101 SEQ=335826367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548F0E30000000001030307) Dec 2 04:36:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9458 DF PROTO=TCP SPT=41292 DPT=9105 SEQ=4090967844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548F3230000000001030307) Dec 2 04:36:27 localhost python3.9[238850]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False Dec 2 04:36:28 localhost python3.9[238960]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:36:29 localhost python3[239070]: ansible-edpm_container_manage Invoked with concurrency=1 
config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:36:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44083 DF PROTO=TCP SPT=33812 DPT=9102 SEQ=3103970790 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54901230000000001030307) Dec 2 04:36:32 localhost podman[239084]: 2025-12-02 09:36:29.590020268 +0000 UTC m=+0.032738663 image pull quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd Dec 2 04:36:32 localhost podman[239157]: Dec 2 04:36:32 localhost podman[239157]: 2025-12-02 09:36:32.289421712 +0000 UTC m=+0.083897157 container create 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi ) Dec 2 04:36:32 localhost podman[239157]: 2025-12-02 09:36:32.251874541 +0000 UTC m=+0.046350006 image pull 
quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd Dec 2 04:36:32 localhost python3[239070]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd Dec 2 04:36:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15796 DF PROTO=TCP SPT=41480 DPT=9882 SEQ=3523525307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5490A210000000001030307) Dec 2 04:36:33 localhost python3.9[239305]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True 
get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:36:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:36:34 localhost podman[239325]: 2025-12-02 09:36:34.079872856 +0000 UTC m=+0.081306756 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base 
Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 04:36:34 localhost podman[239325]: 2025-12-02 09:36:34.090350021 +0000 UTC m=+0.091783921 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:36:34 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: 
Deactivated successfully. Dec 2 04:36:34 localhost python3.9[239435]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:36:35 localhost python3.9[239544]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668194.7149575-1960-175626787906577/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:36:35 localhost python3.9[239599]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:36:35 localhost systemd[1]: Reloading. Dec 2 04:36:36 localhost systemd-rc-local-generator[239624]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:36:36 localhost systemd-sysv-generator[239627]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:36:36 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:36 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:36 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:36 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:36:36 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:36 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:36 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:36 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15798 DF PROTO=TCP SPT=41480 DPT=9882 SEQ=3523525307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54916220000000001030307) Dec 2 04:36:36 localhost python3.9[239690]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:36:36 localhost systemd[1]: Reloading. 
Dec 2 04:36:37 localhost systemd-sysv-generator[239719]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:36:37 localhost systemd-rc-local-generator[239715]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:36:37 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:37 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:37 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:37 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:36:37 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:37 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:37 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:37 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:37 localhost systemd[1]: Starting podman_exporter container... Dec 2 04:36:37 localhost systemd[1]: Started libcrun container. 
Dec 2 04:36:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:36:37 localhost podman[239731]: 2025-12-02 09:36:37.419721059 +0000 UTC m=+0.146922005 container init 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 04:36:37 localhost podman_exporter[239746]: ts=2025-12-02T09:36:37.440Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 2 04:36:37 localhost podman_exporter[239746]: ts=2025-12-02T09:36:37.440Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 2 04:36:37 localhost podman_exporter[239746]: ts=2025-12-02T09:36:37.440Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 2 04:36:37 localhost podman_exporter[239746]: ts=2025-12-02T09:36:37.440Z caller=handler.go:105 level=info collector=container
Dec 2 04:36:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:36:37 localhost podman[239731]: 2025-12-02 09:36:37.451308317 +0000 UTC m=+0.178509273 container start 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:36:37 localhost podman[239731]: podman_exporter
Dec 2 04:36:37 localhost systemd[1]: Starting Podman API Service...
Dec 2 04:36:37 localhost systemd[1]: Started podman_exporter container.
Dec 2 04:36:37 localhost systemd[1]: Started Podman API Service.
Dec 2 04:36:37 localhost podman[239757]: time="2025-12-02T09:36:37Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec 2 04:36:37 localhost podman[239757]: time="2025-12-02T09:36:37Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec 2 04:36:37 localhost podman[239757]: time="2025-12-02T09:36:37Z" level=info msg="Setting parallel job count to 25"
Dec 2 04:36:37 localhost podman[239757]: time="2025-12-02T09:36:37Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec 2 04:36:37 localhost podman[239757]: time="2025-12-02T09:36:37Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"/run/podman/podman.sock\""
Dec 2 04:36:37 localhost podman[239757]: @ - - [02/Dec/2025:09:36:37 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 2 04:36:37 localhost podman[239757]: time="2025-12-02T09:36:37Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:36:37 localhost podman[239756]: 2025-12-02 09:36:37.527671889 +0000 UTC m=+0.069803940 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:36:37 localhost podman[239756]: 2025-12-02 09:36:37.537235675 +0000 UTC m=+0.079367736 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 2 04:36:37 localhost podman[239756]: unhealthy
Dec 2 04:36:37 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 04:36:37 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
Dec 2 04:36:38 localhost python3.9[239900]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 04:36:38 localhost systemd[1]: Stopping podman_exporter container...
Dec 2 04:36:38 localhost podman[239757]: @ - - [02/Dec/2025:09:36:37 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 2790 "" "Go-http-client/1.1"
Dec 2 04:36:38 localhost systemd[1]: libpod-8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.scope: Deactivated successfully.
Dec 2 04:36:38 localhost podman[239904]: 2025-12-02 09:36:38.283949004 +0000 UTC m=+0.054961922 container died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 2 04:36:38 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.timer: Deactivated successfully.
Dec 2 04:36:38 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:36:38 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0-userdata-shm.mount: Deactivated successfully.
Dec 2 04:36:39 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:36:39 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:36:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27297 DF PROTO=TCP SPT=36326 DPT=9100 SEQ=1793506217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54923220000000001030307)
Dec 2 04:36:40 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:36:40 localhost systemd[1]: var-lib-containers-storage-overlay-b567875599537fe8ddd2e294c4bc2f350557061ef7c40059b2f561379ea2a798-merged.mount: Deactivated successfully.
Dec 2 04:36:40 localhost podman[239904]: 2025-12-02 09:36:40.587275984 +0000 UTC m=+2.358288892 container cleanup 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 2 04:36:40 localhost podman[239904]: podman_exporter
Dec 2 04:36:40 localhost podman[239918]: 2025-12-02 09:36:40.599077049 +0000 UTC m=+2.307819821 container cleanup 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:36:42 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:36:42 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:36:42 localhost sshd[239931]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:36:42 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:36:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:36:42 localhost systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 2 04:36:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:36:42 localhost podman[239939]: 2025-12-02 09:36:42.862143294 +0000 UTC m=+0.077530849 container cleanup 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:36:42 localhost podman[239939]: podman_exporter
Dec 2 04:36:42 localhost podman[239933]: 2025-12-02 09:36:42.842640021 +0000 UTC m=+0.091820712 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 2 04:36:42 localhost podman[239933]: 2025-12-02 09:36:42.922796621 +0000 UTC m=+0.171977262 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 2 04:36:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61735 DF PROTO=TCP SPT=46410 DPT=9105 SEQ=1950222307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5492F230000000001030307)
Dec 2 04:36:43 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:36:43 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:36:44 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:36:44 localhost systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec 2 04:36:44 localhost systemd[1]: Stopped podman_exporter container.
Dec 2 04:36:44 localhost systemd[1]: Starting podman_exporter container...
Dec 2 04:36:44 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:36:44 localhost podman[239942]: 2025-12-02 09:36:44.071722641 +0000 UTC m=+1.281932916 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:36:44 localhost podman[239942]: 2025-12-02 09:36:44.110724578 +0000 UTC m=+1.320934843 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:36:44 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:36:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:36:44 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:36:44 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:36:44 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:36:44 localhost systemd[1]: Started libcrun container.
Dec 2 04:36:44 localhost podman[239999]: 2025-12-02 09:36:44.76255347 +0000 UTC m=+0.142692925 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 2 04:36:44 localhost podman[239999]: 2025-12-02 09:36:44.797722089 +0000 UTC m=+0.177861484 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 2 04:36:44 localhost podman[239999]: unhealthy
Dec 2 04:36:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:36:44 localhost podman[239974]: 2025-12-02 09:36:44.844099954 +0000 UTC m=+0.800519324 container init 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 2 04:36:44 localhost podman_exporter[240012]: ts=2025-12-02T09:36:44.859Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 2 04:36:44 localhost podman_exporter[240012]: ts=2025-12-02T09:36:44.859Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 2 04:36:44 localhost podman[239757]: @ - - [02/Dec/2025:09:36:44 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 2 04:36:44 localhost podman[239757]: time="2025-12-02T09:36:44Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:36:44 localhost podman_exporter[240012]: ts=2025-12-02T09:36:44.860Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 2 04:36:44 localhost podman_exporter[240012]: ts=2025-12-02T09:36:44.860Z caller=handler.go:105 level=info collector=container
Dec 2 04:36:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:36:44 localhost podman[239974]: 2025-12-02 09:36:44.925914014 +0000 UTC m=+0.882333394 container start 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 2 04:36:44 localhost podman[239974]: podman_exporter
Dec 2 04:36:45 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 04:36:45 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
Dec 2 04:36:45 localhost systemd[1]: Started podman_exporter container.
Dec 2 04:36:45 localhost podman[240029]: 2025-12-02 09:36:45.098899185 +0000 UTC m=+0.217562991 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:36:45 localhost podman[240029]: 2025-12-02 09:36:45.108789651 +0000 UTC m=+0.227453467 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:36:45 localhost podman[240029]: unhealthy
Dec 2 04:36:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24324 DF PROTO=TCP SPT=43650 DPT=9102 SEQ=863891639 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5493A7E0000000001030307)
Dec 2 04:36:47 localhost python3.9[240162]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:36:47 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:36:47 localhost systemd[1]: var-lib-containers-storage-overlay-6dfa5ad77b1341d3196a32aea0408575f7ecd87125bb33cfdce442fdca4faf78-merged.mount: Deactivated successfully.
Dec 2 04:36:47 localhost systemd[1]: var-lib-containers-storage-overlay-6dfa5ad77b1341d3196a32aea0408575f7ecd87125bb33cfdce442fdca4faf78-merged.mount: Deactivated successfully.
Dec 2 04:36:47 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 04:36:47 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
Dec 2 04:36:47 localhost python3.9[240250]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668206.748281-2057-170889118739154/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 04:36:48 localhost python3.9[240360]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False Dec 2 04:36:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24326 DF PROTO=TCP SPT=43650 DPT=9102 SEQ=863891639 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54946A20000000001030307) Dec 2 04:36:49 localhost python3.9[240470]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:36:50 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:36:50 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Dec 2 04:36:50 localhost python3[240580]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:36:50 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Dec 2 04:36:50 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:36:50 localhost podman[239757]: time="2025-12-02T09:36:50Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged: invalid argument" Dec 2 04:36:50 localhost podman[239757]: time="2025-12-02T09:36:50Z" level=error msg="Getting root fs size for \"04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf\": getting diffsize of layer \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\" and its parent \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\": creating overlay mount to /var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/RTTUT65UCTAX2SKDGZLIU47BZC:/var/lib/containers/storage/overlay/l/RHRLV6JCIALNITZ4KR55MMCPF5:/var/lib/containers/storage/overlay/l/LK2RBR3EFG2ZZO4YQKJAOD6X6T,upperdir=/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/diff,workdir=/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/work,nodev,metacopy=on\": no such file or directory" Dec 2 04:36:50 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in 
undefined behavior. Dec 2 04:36:50 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:36:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47560 DF PROTO=TCP SPT=59252 DPT=9101 SEQ=335826367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54951220000000001030307) Dec 2 04:36:52 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:36:52 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:36:52 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:36:53 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Dec 2 04:36:53 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:36:53 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:36:53 localhost systemd[1]: var-lib-containers-storage-overlay-fe6b5bb5c3faac5bc7b25f16619728c4e2a2d4a71d222c2e5e52b063609b5512-merged.mount: Deactivated successfully. Dec 2 04:36:54 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Dec 2 04:36:54 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:36:55 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:36:55 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:36:55 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:36:55 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:36:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. 
Dec 2 04:36:56 localhost podman[240609]: 2025-12-02 09:36:56.072084387 +0000 UTC m=+0.073413625 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:36:56 localhost podman[240609]: 2025-12-02 09:36:56.083422493 +0000 UTC m=+0.084751731 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:36:56 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:36:56 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:36:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35409 DF PROTO=TCP SPT=41556 DPT=9101 SEQ=2170108911 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54966230000000001030307) Dec 2 04:36:57 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:36:58 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:36:59 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:36:59 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:36:59 localhost systemd[1]: var-lib-containers-storage-overlay-6dfa5ad77b1341d3196a32aea0408575f7ecd87125bb33cfdce442fdca4faf78-merged.mount: Deactivated successfully. Dec 2 04:36:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:36:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:00 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:37:00 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:37:01 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Dec 2 04:37:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24328 DF PROTO=TCP SPT=43650 DPT=9102 SEQ=863891639 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54977220000000001030307) Dec 2 04:37:01 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:37:01 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:37:02 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:37:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:02 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:37:03 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. 
Dec 2 04:37:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:37:03.145 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:37:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:37:03.146 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:37:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:37:03.146 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:37:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14596 DF PROTO=TCP SPT=48604 DPT=9882 SEQ=1361442821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5497F510000000001030307) Dec 2 04:37:04 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:37:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:37:04 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. 
Dec 2 04:37:04 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:37:04 localhost podman[240645]: 2025-12-02 09:37:04.360243986 +0000 UTC m=+0.076530579 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, 
tcib_managed=true) Dec 2 04:37:04 localhost podman[240645]: 2025-12-02 09:37:04.373972795 +0000 UTC m=+0.090259418 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 04:37:04 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined 
behavior. Dec 2 04:37:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14597 DF PROTO=TCP SPT=48604 DPT=9882 SEQ=1361442821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54983620000000001030307) Dec 2 04:37:04 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:04 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:04 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:37:04 localhost nova_compute[229585]: 2025-12-02 09:37:04.923 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:37:04 localhost nova_compute[229585]: 2025-12-02 09:37:04.923 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:37:04 localhost nova_compute[229585]: 2025-12-02 09:37:04.943 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:37:04 localhost nova_compute[229585]: 2025-12-02 09:37:04.944 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic 
task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:37:04 localhost nova_compute[229585]: 2025-12-02 09:37:04.944 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:37:04 localhost nova_compute[229585]: 2025-12-02 09:37:04.945 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:37:04 localhost nova_compute[229585]: 2025-12-02 09:37:04.945 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:37:04 localhost nova_compute[229585]: 2025-12-02 09:37:04.945 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:37:05 localhost systemd[1]: tmp-crun.NrHf4q.mount: Deactivated successfully. Dec 2 04:37:05 localhost systemd[1]: var-lib-containers-storage-overlay-b31a729f52d6f9ece82ff86db83ec0c0420ae47f49a38ed5b1f2bb83a229399e-merged.mount: Deactivated successfully. 
Dec 2 04:37:05 localhost nova_compute[229585]: 2025-12-02 09:37:05.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:37:05 localhost nova_compute[229585]: 2025-12-02 09:37:05.641 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:37:05 localhost nova_compute[229585]: 2025-12-02 09:37:05.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:37:05 localhost nova_compute[229585]: 2025-12-02 09:37:05.671 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:37:05 localhost nova_compute[229585]: 2025-12-02 09:37:05.671 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:37:05 localhost nova_compute[229585]: 2025-12-02 09:37:05.672 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:37:05 localhost nova_compute[229585]: 2025-12-02 09:37:05.707 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:37:05 localhost nova_compute[229585]: 2025-12-02 09:37:05.708 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:37:05 localhost nova_compute[229585]: 2025-12-02 09:37:05.708 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:37:05 localhost nova_compute[229585]: 2025-12-02 09:37:05.708 229589 DEBUG nova.compute.resource_tracker [None 
req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:37:05 localhost nova_compute[229585]: 2025-12-02 09:37:05.709 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:37:05 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:37:05 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:37:06 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:37:06 localhost nova_compute[229585]: 2025-12-02 09:37:06.184 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:37:06 localhost nova_compute[229585]: 2025-12-02 09:37:06.391 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:37:06 localhost nova_compute[229585]: 2025-12-02 09:37:06.392 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=13212MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:37:06 localhost nova_compute[229585]: 2025-12-02 09:37:06.392 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:37:06 localhost nova_compute[229585]: 2025-12-02 09:37:06.392 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:37:06 localhost nova_compute[229585]: 2025-12-02 09:37:06.468 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:37:06 localhost nova_compute[229585]: 2025-12-02 09:37:06.469 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:37:06 localhost nova_compute[229585]: 2025-12-02 09:37:06.485 229589 DEBUG 
oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384 Dec 2 04:37:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14598 DF PROTO=TCP SPT=48604 DPT=9882 SEQ=1361442821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5498B620000000001030307) Dec 2 04:37:06 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:37:06 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:37:06 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Dec 2 04:37:06 localhost nova_compute[229585]: 2025-12-02 09:37:06.938 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422 Dec 2 04:37:06 localhost nova_compute[229585]: 2025-12-02 09:37:06.943 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180 Dec 2 04:37:06 localhost nova_compute[229585]: 2025-12-02 09:37:06.959 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940 Dec 2 04:37:06 localhost nova_compute[229585]: 2025-12-02 09:37:06.962 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995 Dec 2 04:37:06 localhost nova_compute[229585]: 2025-12-02 09:37:06.962 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" 
"released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423 Dec 2 04:37:07 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:37:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:08 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:37:08 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:37:09 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Dec 2 04:37:09 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:37:09 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:09 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:09 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:09 localhost systemd[1]: var-lib-containers-storage-overlay-fe6b5bb5c3faac5bc7b25f16619728c4e2a2d4a71d222c2e5e52b063609b5512-merged.mount: Deactivated successfully. 
Dec 2 04:37:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19249 DF PROTO=TCP SPT=42758 DPT=9100 SEQ=1761696839 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54999220000000001030307) Dec 2 04:37:10 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:37:10 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:37:10 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:37:11 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:37:11 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:37:11 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Dec 2 04:37:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:37:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 04:37:11 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:37:11 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:37:12 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:37:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30359 DF PROTO=TCP SPT=39692 DPT=9105 SEQ=824406069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549A4620000000001030307) Dec 2 04:37:13 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:37:13 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Dec 2 04:37:13 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Dec 2 04:37:14 localhost systemd[1]: var-lib-containers-storage-overlay-54218a875306d5e9e02be164dfc59f569c03cec4fa589e4979e72cb65e05c169-merged.mount: Deactivated successfully. Dec 2 04:37:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:37:14 localhost podman[240719]: 2025-12-02 09:37:14.212057246 +0000 UTC m=+0.077096887 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 2 04:37:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:14 localhost podman[240719]: 2025-12-02 09:37:14.250829501 +0000 UTC m=+0.115869132 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 2 04:37:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:37:14 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:37:15 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:37:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:37:15 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:37:15 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Dec 2 04:37:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:37:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.2 total, 600.0 interval#012Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 04:37:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2810 DF PROTO=TCP SPT=52684 DPT=9102 SEQ=1994394866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549AFAE0000000001030307) Dec 2 04:37:16 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. Dec 2 04:37:16 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully. Dec 2 04:37:16 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully. Dec 2 04:37:16 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:37:16 localhost podman[240736]: 2025-12-02 09:37:16.553587282 +0000 UTC m=+1.801506999 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller) Dec 2 04:37:16 localhost podman[240747]: 2025-12-02 09:37:16.620330131 +0000 UTC m=+1.495154579 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 04:37:16 localhost podman[240736]: 2025-12-02 09:37:16.652861794 +0000 UTC m=+1.900781521 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 04:37:16 localhost podman[240747]: 2025-12-02 09:37:16.703441079 +0000 UTC m=+1.578265517 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 04:37:16 localhost podman[240747]: unhealthy Dec 2 04:37:17 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:37:17 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:37:17 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:37:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:37:18 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 2 04:37:18 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. Dec 2 04:37:18 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. 
Dec 2 04:37:18 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 04:37:18 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:37:18 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'. Dec 2 04:37:18 localhost podman[240595]: 2025-12-02 09:36:50.801435865 +0000 UTC m=+0.028708429 image pull quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7 Dec 2 04:37:18 localhost podman[240787]: 2025-12-02 09:37:18.380444693 +0000 UTC m=+0.382881738 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:37:18 localhost podman[240787]: 2025-12-02 09:37:18.419811776 +0000 UTC m=+0.422248761 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:37:18 localhost podman[240787]: unhealthy Dec 2 04:37:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14600 DF PROTO=TCP SPT=48604 DPT=9882 SEQ=1361442821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549BB220000000001030307) Dec 2 04:37:19 localhost sshd[240810]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:37:19 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:37:19 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 2 04:37:19 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. 
Dec 2 04:37:19 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 2 04:37:19 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:37:19 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'. Dec 2 04:37:21 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:37:21 localhost systemd[1]: var-lib-containers-storage-overlay-b31a729f52d6f9ece82ff86db83ec0c0420ae47f49a38ed5b1f2bb83a229399e-merged.mount: Deactivated successfully. Dec 2 04:37:21 localhost systemd[1]: var-lib-containers-storage-overlay-b31a729f52d6f9ece82ff86db83ec0c0420ae47f49a38ed5b1f2bb83a229399e-merged.mount: Deactivated successfully. Dec 2 04:37:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35411 DF PROTO=TCP SPT=41556 DPT=9101 SEQ=2170108911 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549C7220000000001030307) Dec 2 04:37:22 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. Dec 2 04:37:24 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:37:24 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. 
Dec 2 04:37:24 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Dec 2 04:37:24 localhost podman[240836]: 2025-12-02 09:37:22.109528469 +0000 UTC m=+0.044936473 image pull quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7 Dec 2 04:37:26 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:37:26 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:37:26 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:37:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4292 DF PROTO=TCP SPT=45518 DPT=9101 SEQ=3058026625 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549DB230000000001030307) Dec 2 04:37:27 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully. Dec 2 04:37:27 localhost systemd[1]: var-lib-containers-storage-overlay-ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6-merged.mount: Deactivated successfully. Dec 2 04:37:27 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:37:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. 
Dec 2 04:37:27 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:37:27 localhost podman[240933]: 2025-12-02 09:37:27.610699724 +0000 UTC m=+0.086575996 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 2 04:37:27 localhost podman[240933]: 2025-12-02 09:37:27.626942241 +0000 UTC m=+0.102818583 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 2 04:37:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30362 DF PROTO=TCP SPT=39692 DPT=9105 SEQ=824406069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549DD220000000001030307)
Dec 2 04:37:28 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:37:28 localhost systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 2 04:37:28 localhost systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 2 04:37:28 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 04:37:29 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:37:29 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:37:29 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:37:30 localhost systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 2 04:37:30 localhost systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 2 04:37:31 localhost systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 2 04:37:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2814 DF PROTO=TCP SPT=52684 DPT=9102 SEQ=1994394866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549EB220000000001030307)
Dec 2 04:37:32 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 2 04:37:32 localhost systemd[1]: var-lib-containers-storage-overlay-54218a875306d5e9e02be164dfc59f569c03cec4fa589e4979e72cb65e05c169-merged.mount: Deactivated successfully.
Dec 2 04:37:33 localhost systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 2 04:37:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4612 DF PROTO=TCP SPT=37402 DPT=9882 SEQ=4210985897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549F4810000000001030307)
Dec 2 04:37:33 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 2 04:37:33 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 2 04:37:33 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 2 04:37:34 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 2 04:37:34 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 2 04:37:34 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 2 04:37:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 04:37:35 localhost podman[240954]: 2025-12-02 09:37:35.091083535 +0000 UTC m=+0.099536403 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:37:35 localhost podman[240954]: 2025-12-02 09:37:35.105879227 +0000 UTC m=+0.114332055 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 2 04:37:35 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 2 04:37:35 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 2 04:37:36 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 2 04:37:36 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 2 04:37:36 localhost systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 2 04:37:36 localhost podman[240836]:
Dec 2 04:37:36 localhost podman[240836]: 2025-12-02 09:37:36.50460889 +0000 UTC m=+14.440016874 container create bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_id=edpm, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, name=ubi9-minimal, maintainer=Red Hat, Inc., release=1755695350)
Dec 2 04:37:36 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 04:37:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4614 DF PROTO=TCP SPT=37402 DPT=9882 SEQ=4210985897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A00A20000000001030307)
Dec 2 04:37:37 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 2 04:37:37 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 2 04:37:37 localhost python3[240580]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Dec 2 04:37:37 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 2 04:37:37 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 2 04:37:37 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 2 04:37:37 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 2 04:37:38 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:37:38 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:37:38 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:37:39 localhost python3.9[241107]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:37:39 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 2 04:37:39 localhost systemd[1]: var-lib-containers-storage-overlay-c13e199db7335dd51d53d563216fcc1a3ed75eba14190a583a84b8f73b6c9d42-merged.mount: Deactivated successfully.
Dec 2 04:37:39 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 2 04:37:39 localhost systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 2 04:37:39 localhost systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 2 04:37:40 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 2 04:37:40 localhost systemd[1]: var-lib-containers-storage-overlay-ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6-merged.mount: Deactivated successfully.
Dec 2 04:37:40 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 2 04:37:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52399 DF PROTO=TCP SPT=45552 DPT=9100 SEQ=4105249519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A0F220000000001030307)
Dec 2 04:37:40 localhost python3.9[241219]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:37:40 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:37:40 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:37:40 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:37:41 localhost python3.9[241328]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668260.571447-2215-121249282091281/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:37:41 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 2 04:37:41 localhost python3.9[241383]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 2 04:37:41 localhost systemd[1]: Reloading.
Dec 2 04:37:41 localhost systemd-rc-local-generator[241407]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:37:41 localhost systemd-sysv-generator[241410]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:41 localhost systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 2 04:37:41 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 2 04:37:42 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 2 04:37:42 localhost systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 2 04:37:42 localhost systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 2 04:37:42 localhost systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 2 04:37:42 localhost systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 2 04:37:42 localhost python3.9[241475]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:37:42 localhost systemd[1]: Reloading.
Dec 2 04:37:42 localhost systemd-sysv-generator[241504]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:37:42 localhost systemd-rc-local-generator[241500]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:37:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:37:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:37:42 localhost systemd[1]: Starting openstack_network_exporter container...
Dec 2 04:37:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53188 DF PROTO=TCP SPT=55894 DPT=9105 SEQ=311588103 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A19A20000000001030307)
Dec 2 04:37:43 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 2 04:37:44 localhost systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 2 04:37:44 localhost systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 2 04:37:44 localhost systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 2 04:37:44 localhost systemd[1]: Started libcrun container.
Dec 2 04:37:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7a62300b732c32f6efb3e00bec43152396765f8f0add798fb8ed1cb89b7154/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 2 04:37:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7a62300b732c32f6efb3e00bec43152396765f8f0add798fb8ed1cb89b7154/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 2 04:37:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:37:44 localhost podman[241516]: 2025-12-02 09:37:44.689959987 +0000 UTC m=+1.733630404 container init bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, architecture=x86_64, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 2 04:37:44 localhost openstack_network_exporter[241530]: INFO 09:37:44 main.go:48: registering *bridge.Collector
Dec 2 04:37:44 localhost openstack_network_exporter[241530]: INFO 09:37:44 main.go:48: registering *coverage.Collector
Dec 2 04:37:44 localhost openstack_network_exporter[241530]: INFO 09:37:44 main.go:48: registering *datapath.Collector
Dec 2 04:37:44 localhost openstack_network_exporter[241530]: INFO 09:37:44 main.go:48: registering *iface.Collector
Dec 2 04:37:44 localhost openstack_network_exporter[241530]: INFO 09:37:44 main.go:48: registering *memory.Collector
Dec 2 04:37:44 localhost openstack_network_exporter[241530]: INFO 09:37:44 main.go:48: registering *ovnnorthd.Collector
Dec 2 04:37:44 localhost openstack_network_exporter[241530]: INFO 09:37:44 main.go:48: registering *ovn.Collector
Dec 2 04:37:44 localhost openstack_network_exporter[241530]: INFO 09:37:44 main.go:48: registering *ovsdbserver.Collector
Dec 2 04:37:44 localhost openstack_network_exporter[241530]: INFO 09:37:44 main.go:48: registering *pmd_perf.Collector
Dec 2 04:37:44 localhost openstack_network_exporter[241530]: INFO 09:37:44 main.go:48: registering *pmd_rxq.Collector
Dec 2 04:37:44 localhost openstack_network_exporter[241530]: INFO 09:37:44 main.go:48: registering *vswitch.Collector
Dec 2 04:37:44 localhost openstack_network_exporter[241530]: NOTICE 09:37:44 main.go:82: listening on http://:9105/metrics
Dec 2 04:37:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:37:44 localhost podman[241516]: 2025-12-02 09:37:44.727865065 +0000 UTC m=+1.771535482 container start bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, config_id=edpm, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible)
Dec 2 04:37:44 localhost podman[241516]: openstack_network_exporter
Dec 2 04:37:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17613 DF PROTO=TCP SPT=39276 DPT=9102 SEQ=4230449605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A24DE0000000001030307)
Dec 2 04:37:46 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 2 04:37:46 localhost systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 2 04:37:46 localhost systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully. Dec 2 04:37:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:37:46 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:37:46 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:37:47 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:37:47 localhost systemd[1]: Started openstack_network_exporter container. Dec 2 04:37:47 localhost podman[241540]: 2025-12-02 09:37:47.172855522 +0000 UTC m=+2.438736477 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:37:47 localhost podman[241540]: 2025-12-02 09:37:47.25332922 +0000 UTC m=+2.519210145 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, managed_by=edpm_ansible) Dec 2 04:37:47 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 2 04:37:47 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 2 04:37:47 localhost python3.9[241683]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:37:48 localhost systemd[1]: Stopping openstack_network_exporter container... Dec 2 04:37:48 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 2 04:37:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:37:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:37:48 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Dec 2 04:37:48 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:37:48 localhost sshd[241720]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:37:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17615 DF PROTO=TCP SPT=39276 DPT=9102 SEQ=4230449605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A30E20000000001030307) Dec 2 04:37:49 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:37:49 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 04:37:49 localhost podman[241698]: 2025-12-02 09:37:49.245370999 +0000 UTC m=+0.410008708 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 
'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm) Dec 2 04:37:49 localhost systemd[1]: libpod-bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.scope: Deactivated successfully. Dec 2 04:37:49 localhost podman[241698]: 2025-12-02 09:37:49.282370759 +0000 UTC m=+0.447008418 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:37:49 localhost podman[241698]: unhealthy Dec 2 04:37:49 localhost podman[241699]: 2025-12-02 09:37:49.338995919 +0000 UTC m=+0.500687378 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:37:49 localhost podman[241687]: 2025-12-02 09:37:49.342609199 +0000 UTC m=+1.313691605 container died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, release=1755695350, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=) Dec 2 04:37:49 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.timer: Deactivated successfully. Dec 2 04:37:49 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:37:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:37:49 localhost podman[241699]: 2025-12-02 09:37:49.397812686 +0000 UTC m=+0.559504175 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3) Dec 2 04:37:49 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 2 04:37:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be-userdata-shm.mount: Deactivated successfully. Dec 2 04:37:50 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:37:50 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:37:50 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:37:50 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:37:50 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'. Dec 2 04:37:50 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:37:50 localhost podman[241754]: 2025-12-02 09:37:50.460765459 +0000 UTC m=+1.087213456 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:37:50 localhost podman[241687]: 2025-12-02 09:37:50.46896268 +0000 UTC m=+2.440045106 container cleanup bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, container_name=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7) Dec 2 04:37:50 localhost podman[241687]: openstack_network_exporter Dec 2 04:37:50 localhost 
podman[241732]: 2025-12-02 09:37:50.479056359 +0000 UTC m=+1.186297944 container cleanup bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 04:37:50 localhost podman[241754]: 2025-12-02 09:37:50.493023185 +0000 UTC m=+1.119471202 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:37:50 localhost podman[241754]: unhealthy Dec 2 04:37:50 localhost systemd[1]: var-lib-containers-storage-overlay-9f7a62300b732c32f6efb3e00bec43152396765f8f0add798fb8ed1cb89b7154-merged.mount: Deactivated successfully. 
Dec 2 04:37:50 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:37:50 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:37:51 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:37:51 localhost systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Dec 2 04:37:51 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:37:51 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'. Dec 2 04:37:51 localhost podman[241783]: 2025-12-02 09:37:51.156848246 +0000 UTC m=+0.057724675 container cleanup bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, io.buildah.version=1.33.7, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
release=1755695350) Dec 2 04:37:51 localhost podman[241783]: openstack_network_exporter Dec 2 04:37:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4294 DF PROTO=TCP SPT=45518 DPT=9101 SEQ=3058026625 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A3B220000000001030307) Dec 2 04:37:52 localhost systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully. Dec 2 04:37:52 localhost systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'. Dec 2 04:37:52 localhost systemd[1]: Stopped openstack_network_exporter container. Dec 2 04:37:52 localhost systemd[1]: Starting openstack_network_exporter container... Dec 2 04:37:52 localhost podman[241553]: 2025-12-02 09:37:52.777083945 +0000 UTC m=+6.025152594 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 04:37:52 localhost podman[241553]: 2025-12-02 09:37:52.815796108 +0000 UTC m=+6.063864787 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent) Dec 2 04:37:53 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:37:53 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Dec 2 04:37:53 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Dec 2 04:37:55 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:37:55 localhost systemd[1]: var-lib-containers-storage-overlay-d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a-merged.mount: Deactivated successfully. Dec 2 04:37:55 localhost systemd[1]: var-lib-containers-storage-overlay-d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a-merged.mount: Deactivated successfully. 
Dec 2 04:37:55 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:55 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:55 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:37:55 localhost systemd[1]: Started libcrun container. Dec 2 04:37:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7a62300b732c32f6efb3e00bec43152396765f8f0add798fb8ed1cb89b7154/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Dec 2 04:37:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7a62300b732c32f6efb3e00bec43152396765f8f0add798fb8ed1cb89b7154/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff) Dec 2 04:37:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:37:55 localhost podman[241797]: 2025-12-02 09:37:55.340624843 +0000 UTC m=+2.570477421 container init bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41) Dec 2 04:37:55 localhost openstack_network_exporter[241816]: INFO 09:37:55 main.go:48: registering *bridge.Collector Dec 2 04:37:55 localhost openstack_network_exporter[241816]: INFO 09:37:55 main.go:48: registering *coverage.Collector Dec 2 04:37:55 localhost openstack_network_exporter[241816]: INFO 09:37:55 main.go:48: registering *datapath.Collector Dec 2 04:37:55 localhost openstack_network_exporter[241816]: INFO 09:37:55 main.go:48: registering *iface.Collector Dec 2 04:37:55 localhost openstack_network_exporter[241816]: INFO 09:37:55 main.go:48: registering *memory.Collector Dec 2 04:37:55 localhost openstack_network_exporter[241816]: INFO 09:37:55 main.go:48: registering *ovnnorthd.Collector Dec 2 04:37:55 localhost openstack_network_exporter[241816]: INFO 09:37:55 main.go:48: registering *ovn.Collector Dec 2 04:37:55 localhost openstack_network_exporter[241816]: INFO 09:37:55 main.go:48: registering *ovsdbserver.Collector Dec 2 04:37:55 localhost openstack_network_exporter[241816]: INFO 09:37:55 main.go:48: registering *pmd_perf.Collector Dec 2 04:37:55 localhost openstack_network_exporter[241816]: INFO 09:37:55 main.go:48: registering *pmd_rxq.Collector Dec 2 04:37:55 localhost openstack_network_exporter[241816]: INFO 09:37:55 main.go:48: registering *vswitch.Collector Dec 2 04:37:55 localhost openstack_network_exporter[241816]: NOTICE 09:37:55 main.go:82: listening on http://:9105/metrics Dec 2 04:37:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 04:37:55 localhost podman[241797]: 2025-12-02 09:37:55.374554049 +0000 UTC m=+2.604406637 container start bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-type=git, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., 
url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41) Dec 2 04:37:55 localhost podman[241797]: openstack_network_exporter Dec 2 04:37:55 localhost podman[239757]: time="2025-12-02T09:37:55Z" level=error msg="Getting root fs size for \"2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy" Dec 2 04:37:56 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:37:56 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Dec 2 04:37:56 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Dec 2 04:37:56 localhost systemd[1]: Started openstack_network_exporter container. 
Dec 2 04:37:56 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:56 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:56 localhost podman[241826]: 2025-12-02 09:37:56.139335004 +0000 UTC m=+0.759460783 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, version=9.6, config_id=edpm, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git) Dec 2 04:37:56 localhost podman[241826]: 2025-12-02 09:37:56.184803074 +0000 UTC m=+0.804928793 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.33.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 2 04:37:57 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 04:37:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7545 DF PROTO=TCP SPT=40806 DPT=9101 SEQ=3362580343 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A50630000000001030307) Dec 2 04:37:57 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:37:57 localhost python3.9[241953]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 2 04:37:58 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:37:58 localhost systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully. Dec 2 04:37:58 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 2 04:37:58 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Dec 2 04:37:58 localhost systemd[1]: var-lib-containers-storage-overlay-400c7ba0962a9736ae4730e3c3204c67b2bad8d9266c2a49e5c729fb35c892ee-merged.mount: Deactivated successfully. Dec 2 04:37:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. 
Dec 2 04:37:59 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. Dec 2 04:37:59 localhost systemd[1]: tmp-crun.M69nHR.mount: Deactivated successfully. Dec 2 04:37:59 localhost podman[241971]: 2025-12-02 09:37:59.102658732 +0000 UTC m=+0.104763676 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:37:59 localhost podman[241971]: 2025-12-02 09:37:59.1400229 +0000 UTC m=+0.142127804 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:37:59 localhost systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully. Dec 2 04:37:59 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:59 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:37:59 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:37:59 localhost podman[239757]: time="2025-12-02T09:37:59Z" level=error msg="Getting root fs size for \"306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c\": unmounting layer 4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36: replacing mount point \"/var/lib/containers/storage/overlay/4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36/merged\": device or resource busy" Dec 2 04:37:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:38:01 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully. Dec 2 04:38:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17617 DF PROTO=TCP SPT=39276 DPT=9102 SEQ=4230449605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A61220000000001030307) Dec 2 04:38:02 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Dec 2 04:38:02 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Dec 2 04:38:02 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. 
Dec 2 04:38:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:38:03.146 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:38:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:38:03.146 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:38:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:38:03.147 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:38:03 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:38:03 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:38:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34743 DF PROTO=TCP SPT=58138 DPT=9882 SEQ=1213125736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A69B10000000001030307) Dec 2 04:38:04 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. 
Dec 2 04:38:04 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Dec 2 04:38:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34744 DF PROTO=TCP SPT=58138 DPT=9882 SEQ=1213125736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A6DA30000000001030307) Dec 2 04:38:04 localhost nova_compute[229585]: 2025-12-02 09:38:04.931 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:38:04 localhost nova_compute[229585]: 2025-12-02 09:38:04.932 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:38:04 localhost nova_compute[229585]: 2025-12-02 09:38:04.932 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:38:04 localhost nova_compute[229585]: 2025-12-02 09:38:04.932 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:38:04 localhost nova_compute[229585]: 2025-12-02 09:38:04.933 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] 
Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:38:04 localhost nova_compute[229585]: 2025-12-02 09:38:04.933 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:38:04 localhost nova_compute[229585]: 2025-12-02 09:38:04.933 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:38:05 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:38:05 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:38:05 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:38:05 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:38:05 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Dec 2 04:38:05 localhost nova_compute[229585]: 2025-12-02 09:38:05.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:38:05 localhost nova_compute[229585]: 2025-12-02 09:38:05.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:38:05 localhost nova_compute[229585]: 2025-12-02 09:38:05.663 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:38:05 localhost nova_compute[229585]: 2025-12-02 09:38:05.664 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:38:05 localhost nova_compute[229585]: 2025-12-02 09:38:05.664 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:38:05 localhost nova_compute[229585]: 2025-12-02 09:38:05.664 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for 
np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:38:05 localhost nova_compute[229585]: 2025-12-02 09:38:05.665 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:38:05 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:38:06 localhost nova_compute[229585]: 2025-12-02 09:38:06.088 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:38:06 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:38:06 localhost nova_compute[229585]: 2025-12-02 09:38:06.247 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:38:06 localhost nova_compute[229585]: 2025-12-02 09:38:06.248 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=13206MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:38:06 localhost nova_compute[229585]: 2025-12-02 09:38:06.248 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:38:06 localhost nova_compute[229585]: 2025-12-02 09:38:06.249 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:38:06 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:38:06 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:38:06 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:38:06 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Dec 2 04:38:06 localhost nova_compute[229585]: 2025-12-02 09:38:06.334 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:38:06 localhost nova_compute[229585]: 2025-12-02 09:38:06.334 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:38:06 localhost nova_compute[229585]: 2025-12-02 09:38:06.354 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:38:06 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:38:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:38:06 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:38:06 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Dec 2 04:38:06 localhost podman[242036]: 2025-12-02 09:38:06.630875159 +0000 UTC m=+0.080516983 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 04:38:06 localhost podman[242036]: 2025-12-02 09:38:06.664880752 +0000 UTC m=+0.114522576 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Dec 2 04:38:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34745 DF PROTO=TCP SPT=58138 DPT=9882 SEQ=1213125736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AD54A75A30000000001030307) Dec 2 04:38:06 localhost nova_compute[229585]: 2025-12-02 09:38:06.814 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:38:06 localhost nova_compute[229585]: 2025-12-02 09:38:06.821 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:38:06 localhost nova_compute[229585]: 2025-12-02 09:38:06.838 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:38:06 localhost nova_compute[229585]: 2025-12-02 09:38:06.840 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:38:06 localhost nova_compute[229585]: 2025-12-02 09:38:06.841 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - 
- - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:38:07 localhost systemd[1]: tmp-crun.JJCrz5.mount: Deactivated successfully. Dec 2 04:38:07 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Dec 2 04:38:07 localhost systemd[1]: var-lib-containers-storage-overlay-a9f966c4c02ca72bf571aaf0656247c88b73268323ddd77e58521b9ea3db73d1-merged.mount: Deactivated successfully. Dec 2 04:38:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:38:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:38:07 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:38:07 localhost nova_compute[229585]: 2025-12-02 09:38:07.841 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:38:07 localhost nova_compute[229585]: 2025-12-02 09:38:07.842 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:38:07 localhost nova_compute[229585]: 2025-12-02 09:38:07.842 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:38:07 localhost nova_compute[229585]: 2025-12-02 09:38:07.855 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:38:08 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:38:08 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:38:08 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:38:10 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Dec 2 04:38:10 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Dec 2 04:38:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23105 DF PROTO=TCP SPT=53472 DPT=9100 SEQ=2106322161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A83230000000001030307) Dec 2 04:38:10 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Dec 2 04:38:11 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:38:11 localhost systemd[1]: var-lib-containers-storage-overlay-d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a-merged.mount: Deactivated successfully. Dec 2 04:38:11 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:38:12 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:38:12 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:38:12 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:38:12 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. 
Dec 2 04:38:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52400 DF PROTO=TCP SPT=45552 DPT=9100 SEQ=4105249519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A8D220000000001030307) Dec 2 04:38:13 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:38:13 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:38:13 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:38:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:38:13 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:38:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:38:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:38:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:38:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Dec 2 04:38:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:38:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:38:14 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Dec 2 04:38:14 localhost systemd[1]: var-lib-containers-storage-overlay-400c7ba0962a9736ae4730e3c3204c67b2bad8d9266c2a49e5c729fb35c892ee-merged.mount: Deactivated successfully. Dec 2 04:38:14 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:38:14 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:38:14 localhost systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully. Dec 2 04:38:14 localhost systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully. 
Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.432 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost 
ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager 
[-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:38:15 localhost systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully. Dec 2 04:38:15 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully. Dec 2 04:38:15 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully. Dec 2 04:38:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42430 DF PROTO=TCP SPT=50318 DPT=9102 SEQ=3794331324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A9A110000000001030307) Dec 2 04:38:16 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. 
Dec 2 04:38:16 localhost systemd[1]: var-lib-containers-storage-overlay-47f9fab5806f96664fad9b3e3421bfde63bb6a7412470abd2bfea5e9a57acc82-merged.mount: Deactivated successfully.
Dec 2 04:38:17 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 2 04:38:18 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 2 04:38:18 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 2 04:38:18 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 2 04:38:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34747 DF PROTO=TCP SPT=58138 DPT=9882 SEQ=1213125736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54AA5220000000001030307)
Dec 2 04:38:19 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 2 04:38:19 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 2 04:38:20 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 2 04:38:20 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 2 04:38:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:38:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:38:21 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 2 04:38:21 localhost systemd[1]: tmp-crun.hNaSSy.mount: Deactivated successfully.
Dec 2 04:38:21 localhost podman[242057]: 2025-12-02 09:38:21.072933145 +0000 UTC m=+0.072985601 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec 2 04:38:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:38:21 localhost podman[242056]: 2025-12-02 09:38:21.141689334 +0000 UTC m=+0.145641091 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 2 04:38:21 localhost podman[242057]: 2025-12-02 09:38:21.143018046 +0000 UTC m=+0.143070522 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:38:21 localhost podman[242056]: 2025-12-02 09:38:21.225843447 +0000 UTC m=+0.229795194 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 2 04:38:21 localhost podman[242056]: unhealthy
Dec 2 04:38:21 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:38:21 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 04:38:21 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
Dec 2 04:38:21 localhost podman[242090]: 2025-12-02 09:38:21.404300055 +0000 UTC m=+0.275901479 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 04:38:21 localhost podman[242090]: 2025-12-02 09:38:21.437823764 +0000 UTC m=+0.309425228 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:38:21 localhost podman[242090]: unhealthy
Dec 2 04:38:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7547 DF PROTO=TCP SPT=40806 DPT=9101 SEQ=3362580343 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54AB1220000000001030307)
Dec 2 04:38:22 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 2 04:38:22 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:22 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:38:22 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:38:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:22 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 04:38:22 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
Dec 2 04:38:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:23 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:23 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:38:24 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 2 04:38:24 localhost systemd[1]: var-lib-containers-storage-overlay-a9f966c4c02ca72bf571aaf0656247c88b73268323ddd77e58521b9ea3db73d1-merged.mount: Deactivated successfully.
Dec 2 04:38:25 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 2 04:38:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:38:25 localhost systemd[1]: var-lib-containers-storage-overlay-a1e958529aaf3ea18edfde977fa21cc545be3514f2ed0637a72be1cc0091549c-merged.mount: Deactivated successfully.
Dec 2 04:38:25 localhost podman[242120]: 2025-12-02 09:38:25.594682606 +0000 UTC m=+0.099389332 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 2 04:38:25 localhost podman[242120]: 2025-12-02 09:38:25.623885592 +0000 UTC m=+0.128592288 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 2 04:38:26 localhost systemd[1]: tmp-crun.UYcgep.mount: Deactivated successfully.
Dec 2 04:38:26 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:38:26 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 2 04:38:26 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 2 04:38:26 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:38:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:38:27 localhost systemd[1]: tmp-crun.cdbb8c.mount: Deactivated successfully.
Dec 2 04:38:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12055 DF PROTO=TCP SPT=55838 DPT=9101 SEQ=938634073 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54AC5A20000000001030307)
Dec 2 04:38:27 localhost podman[242175]: 2025-12-02 09:38:27.163817635 +0000 UTC m=+0.061953262 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, version=9.6, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Dec 2 04:38:27 localhost podman[242175]: 2025-12-02 09:38:27.200756619 +0000 UTC m=+0.098892206 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, release=1755695350, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc.)
Dec 2 04:38:27 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:27 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 2 04:38:28 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:38:28 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:38:28 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:38:28 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 04:38:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:29 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:38:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 04:38:29 localhost podman[239757]: time="2025-12-02T09:38:29Z" level=error msg="Getting root fs size for \"64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy"
Dec 2 04:38:29 localhost podman[242226]: 2025-12-02 09:38:29.775173533 +0000 UTC m=+0.088214118 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 04:38:29 localhost podman[242226]: 2025-12-02 09:38:29.816549873 +0000 UTC m=+0.129590408 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 2 04:38:30 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:30 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:38:30 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 2 04:38:30 localhost systemd[1]: var-lib-containers-storage-overlay-0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf-merged.mount: Deactivated successfully.
Dec 2 04:38:30 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:30 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 04:38:31 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:38:31 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:31 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 2 04:38:31 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:31 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:31 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:31 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:31 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:31 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42434 DF PROTO=TCP SPT=50318 DPT=9102 SEQ=3794331324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54AD7230000000001030307)
Dec 2 04:38:32 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:38:32 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:38:33 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 2 04:38:33 localhost systemd[1]: var-lib-containers-storage-overlay-46d22fb86a8cbaa2935fad3e910e4610328c0a9c2837bb75cb2a0cd28ff52849-merged.mount: Deactivated successfully.
Dec 2 04:38:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47766 DF PROTO=TCP SPT=45816 DPT=9882 SEQ=1537288816 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54ADEE10000000001030307)
Dec 2 04:38:34 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 2 04:38:34 localhost systemd[1]: var-lib-containers-storage-overlay-47f9fab5806f96664fad9b3e3421bfde63bb6a7412470abd2bfea5e9a57acc82-merged.mount: Deactivated successfully.
Dec 2 04:38:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47767 DF PROTO=TCP SPT=45816 DPT=9882 SEQ=1537288816 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54AE2E20000000001030307)
Dec 2 04:38:35 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:38:35 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:38:35 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:38:36 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 2 04:38:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47768 DF PROTO=TCP SPT=45816 DPT=9882 SEQ=1537288816 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54AEAE20000000001030307)
Dec 2 04:38:37 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:38:37 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:38:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 04:38:37 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 2 04:38:37 localhost podman[242267]: 2025-12-02 09:38:37.947596551 +0000 UTC m=+0.088564219 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Dec 2 04:38:37 localhost podman[242267]: 2025-12-02 09:38:37.958068162 +0000 UTC m=+0.099035790 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 2 04:38:38 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 04:38:38 localhost systemd[1]: tmp-crun.NVESwl.mount: Deactivated successfully.
Dec 2 04:38:38 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:38:39 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:38:39 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:38:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:40 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:38:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42583 DF PROTO=TCP SPT=47492 DPT=9100 SEQ=3262698782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54AF9230000000001030307)
Dec 2 04:38:40 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:40 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:41 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:38:42 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 2 04:38:42 localhost systemd[1]: var-lib-containers-storage-overlay-a1e958529aaf3ea18edfde977fa21cc545be3514f2ed0637a72be1cc0091549c-merged.mount: Deactivated successfully.
Dec 2 04:38:43 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:38:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64106 DF PROTO=TCP SPT=54532 DPT=9105 SEQ=2084957773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B03E20000000001030307)
Dec 2 04:38:43 localhost systemd[1]: var-lib-containers-storage-overlay-9bcbe901bb45e8070f2f315648c2b8d8a4260ab9ddef9da25ac029ee28a25fc8-merged.mount: Deactivated successfully.
Dec 2 04:38:43 localhost systemd[1]: var-lib-containers-storage-overlay-a1e958529aaf3ea18edfde977fa21cc545be3514f2ed0637a72be1cc0091549c-merged.mount: Deactivated successfully.
Dec 2 04:38:44 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 2 04:38:44 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 2 04:38:44 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 2 04:38:44 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:45 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:38:45 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 2 04:38:45 localhost systemd[1]: var-lib-containers-storage-overlay-0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf-merged.mount: Deactivated successfully.
Dec 2 04:38:45 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 2 04:38:45 localhost systemd[1]: var-lib-containers-storage-overlay-d4bf0a50fd432b1e17b5b60f382aa20fe21251bda35e0089667eec28efb9c70f-merged.mount: Deactivated successfully.
Dec 2 04:38:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53193 DF PROTO=TCP SPT=55894 DPT=9105 SEQ=311588103 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B0F220000000001030307)
Dec 2 04:38:46 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:46 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 2 04:38:47 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:38:47 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:47 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:38:48 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 2 04:38:48 localhost systemd[1]: var-lib-containers-storage-overlay-46d22fb86a8cbaa2935fad3e910e4610328c0a9c2837bb75cb2a0cd28ff52849-merged.mount: Deactivated successfully.
Dec 2 04:38:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47770 DF PROTO=TCP SPT=45816 DPT=9882 SEQ=1537288816 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B1B220000000001030307)
Dec 2 04:38:49 localhost sshd[242286]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:38:50 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:38:50 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:38:50 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:38:50 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:50 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:51 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:38:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:38:51 localhost podman[242288]: 2025-12-02 09:38:51.473444986 +0000 UTC m=+0.080010577 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 2 04:38:51 localhost podman[242289]: 2025-12-02 09:38:51.540632997 +0000 UTC m=+0.143384351 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 2 04:38:51 localhost podman[242288]: 2025-12-02 09:38:51.563545551 +0000 UTC m=+0.170111242 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 2 04:38:51 localhost podman[242288]: unhealthy
Dec 2 04:38:51 localhost podman[242289]: 2025-12-02 09:38:51.600042251 +0000 UTC m=+0.202793575 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec 2 04:38:52 localhost systemd[1]: tmp-crun.3JBfZv.mount: Deactivated successfully.
Dec 2 04:38:52 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:38:52 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:52 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:38:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:38:52 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:38:52 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:38:52 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 04:38:52 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
Dec 2 04:38:52 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:38:52 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:52 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:52 localhost podman[242330]: 2025-12-02 09:38:52.85051816 +0000 UTC m=+0.383278984 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:38:52 localhost podman[242330]: 2025-12-02 09:38:52.883018098 +0000 UTC m=+0.415778852 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:38:52 localhost podman[242330]: unhealthy
Dec 2 04:38:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43163 DF PROTO=TCP SPT=36102 DPT=9102 SEQ=1319628034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B2B220000000001030307)
Dec 2 04:38:53 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:38:53 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:53 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:38:53 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 04:38:53 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
Dec 2 04:38:54 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:38:54 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:38:54 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:54 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:56 localhost systemd[1]: var-lib-containers-storage-overlay-297413164ba634cc6890ee6589cadf094aa7e1bc60468b5e2b171a73d85ccd70-merged.mount: Deactivated successfully.
Dec 2 04:38:56 localhost systemd[1]: var-lib-containers-storage-overlay-297413164ba634cc6890ee6589cadf094aa7e1bc60468b5e2b171a73d85ccd70-merged.mount: Deactivated successfully.
Dec 2 04:38:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:38:57 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:38:57 localhost podman[242355]: 2025-12-02 09:38:57.084952793 +0000 UTC m=+0.089235150 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 2 04:38:57 localhost podman[242355]: 2025-12-02 09:38:57.094878497 +0000 UTC m=+0.099160934 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 2 04:38:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16474 DF PROTO=TCP SPT=58868 DPT=9101 SEQ=1055772396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B3AE20000000001030307)
Dec 2 04:38:57 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 2 04:38:57 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 2 04:38:57 localhost systemd[1]: var-lib-containers-storage-overlay-9bcbe901bb45e8070f2f315648c2b8d8a4260ab9ddef9da25ac029ee28a25fc8-merged.mount: Deactivated successfully.
Dec 2 04:38:57 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 2 04:38:57 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:38:58 localhost podman[239757]: time="2025-12-02T09:38:58Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged: invalid argument"
Dec 2 04:38:58 localhost podman[239757]: time="2025-12-02T09:38:58Z" level=error msg="Getting root fs size for \"7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": creating overlay mount to /var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/LK2RBR3EFG2ZZO4YQKJAOD6X6T,upperdir=/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/diff,workdir=/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/work,nodev,metacopy=on\": no such file or directory"
Dec 2 04:38:58 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:58 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:38:58 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 2 04:38:58 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 2 04:38:58 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:38:58 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:58 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:38:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:38:59 localhost podman[242372]: 2025-12-02 09:38:59.053174371 +0000 UTC m=+0.061844649 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, release=1755695350) Dec 2 04:38:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Dec 2 04:38:59 localhost podman[242372]: 2025-12-02 09:38:59.063483838 +0000 UTC m=+0.072154116 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9) Dec 2 04:38:59 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Dec 2 04:38:59 localhost systemd[1]: var-lib-containers-storage-overlay-d40ebd622fb49c1d984ae69be39f1f1d5d9bbd0185c9e75888b797dd6f2afb7e-merged.mount: Deactivated successfully. Dec 2 04:38:59 localhost systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully. Dec 2 04:38:59 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully. Dec 2 04:39:00 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully. Dec 2 04:39:00 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 04:39:00 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. 
Dec 2 04:39:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:39:00 localhost systemd[1]: var-lib-containers-storage-overlay-d4bf0a50fd432b1e17b5b60f382aa20fe21251bda35e0089667eec28efb9c70f-merged.mount: Deactivated successfully. Dec 2 04:39:00 localhost systemd[1]: var-lib-containers-storage-overlay-d4bf0a50fd432b1e17b5b60f382aa20fe21251bda35e0089667eec28efb9c70f-merged.mount: Deactivated successfully. Dec 2 04:39:00 localhost podman[242393]: 2025-12-02 09:39:00.843889953 +0000 UTC m=+0.066852613 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, 
maintainer=The Prometheus Authors ) Dec 2 04:39:00 localhost podman[242393]: 2025-12-02 09:39:00.874544773 +0000 UTC m=+0.097507473 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:39:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43164 DF PROTO=TCP SPT=36102 DPT=9102 SEQ=1319628034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B4B220000000001030307) Dec 2 04:39:01 localhost systemd[1]: 
var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully. Dec 2 04:39:03 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:39:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:39:03.147 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:39:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:39:03.147 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:39:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:39:03.147 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:39:03 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:39:03 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:39:03 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:39:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51481 DF PROTO=TCP SPT=40582 DPT=9882 SEQ=3845157538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B54110000000001030307) Dec 2 04:39:03 localhost nova_compute[229585]: 2025-12-02 09:39:03.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:39:04 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:39:04 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Dec 2 04:39:04 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. 
Dec 2 04:39:04 localhost nova_compute[229585]: 2025-12-02 09:39:04.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:39:04 localhost nova_compute[229585]: 2025-12-02 09:39:04.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:39:04 localhost nova_compute[229585]: 2025-12-02 09:39:04.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:39:04 localhost nova_compute[229585]: 2025-12-02 09:39:04.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:39:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51482 DF PROTO=TCP SPT=40582 DPT=9882 SEQ=3845157538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B58220000000001030307) Dec 2 04:39:05 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:39:05 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Dec 2 04:39:05 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:39:05 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:39:05 localhost nova_compute[229585]: 2025-12-02 09:39:05.636 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:39:05 localhost nova_compute[229585]: 2025-12-02 09:39:05.637 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:39:05 localhost nova_compute[229585]: 2025-12-02 09:39:05.658 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:39:06 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:39:06 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Dec 2 04:39:06 localhost systemd[1]: var-lib-containers-storage-overlay-1f51912cd7ca4d93a076413ed4727a62a427f09f722d7bf72e350182571c8db0-merged.mount: Deactivated successfully. 
Dec 2 04:39:06 localhost systemd[1]: var-lib-containers-storage-overlay-1f51912cd7ca4d93a076413ed4727a62a427f09f722d7bf72e350182571c8db0-merged.mount: Deactivated successfully. Dec 2 04:39:06 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:39:06 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:39:06 localhost nova_compute[229585]: 2025-12-02 09:39:06.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:39:06 localhost nova_compute[229585]: 2025-12-02 09:39:06.660 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:39:06 localhost nova_compute[229585]: 2025-12-02 09:39:06.661 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:39:06 localhost nova_compute[229585]: 2025-12-02 09:39:06.661 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:39:06 localhost 
nova_compute[229585]: 2025-12-02 09:39:06.661 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:39:06 localhost nova_compute[229585]: 2025-12-02 09:39:06.661 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:39:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51483 DF PROTO=TCP SPT=40582 DPT=9882 SEQ=3845157538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B60220000000001030307) Dec 2 04:39:07 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:39:07 localhost nova_compute[229585]: 2025-12-02 09:39:07.125 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:39:07 localhost nova_compute[229585]: 2025-12-02 09:39:07.279 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:39:07 localhost nova_compute[229585]: 2025-12-02 09:39:07.281 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=13164MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:39:07 localhost nova_compute[229585]: 2025-12-02 09:39:07.281 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:39:07 localhost nova_compute[229585]: 2025-12-02 09:39:07.281 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:39:07 localhost nova_compute[229585]: 2025-12-02 09:39:07.340 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:39:07 localhost nova_compute[229585]: 2025-12-02 09:39:07.341 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:39:07 localhost nova_compute[229585]: 2025-12-02 09:39:07.355 229589 DEBUG 
oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:39:07 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:39:07 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:39:07 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:39:07 localhost nova_compute[229585]: 2025-12-02 09:39:07.820 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:39:07 localhost nova_compute[229585]: 2025-12-02 09:39:07.826 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:39:07 localhost nova_compute[229585]: 2025-12-02 09:39:07.840 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 
'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:39:07 localhost nova_compute[229585]: 2025-12-02 09:39:07.842 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:39:07 localhost nova_compute[229585]: 2025-12-02 09:39:07.842 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:39:08 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. 
Dec 2 04:39:08 localhost nova_compute[229585]: 2025-12-02 09:39:08.843 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:39:08 localhost nova_compute[229585]: 2025-12-02 09:39:08.843 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:39:08 localhost nova_compute[229585]: 2025-12-02 09:39:08.844 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:39:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:39:08 localhost nova_compute[229585]: 2025-12-02 09:39:08.860 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:39:08 localhost nova_compute[229585]: 2025-12-02 09:39:08.860 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:39:08 localhost podman[242460]: 2025-12-02 09:39:08.920970323 +0000 UTC m=+0.060766734 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 2 04:39:08 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Dec 2 04:39:08 localhost podman[242460]: 2025-12-02 09:39:08.935859027 +0000 UTC m=+0.075655438 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:39:09 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Dec 2 04:39:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32856 DF PROTO=TCP SPT=60250 DPT=9100 SEQ=2569318876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B6D220000000001030307) Dec 2 04:39:10 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:39:10 localhost systemd[1]: var-lib-containers-storage-overlay-297413164ba634cc6890ee6589cadf094aa7e1bc60468b5e2b171a73d85ccd70-merged.mount: Deactivated successfully. Dec 2 04:39:10 localhost systemd[1]: var-lib-containers-storage-overlay-297413164ba634cc6890ee6589cadf094aa7e1bc60468b5e2b171a73d85ccd70-merged.mount: Deactivated successfully. Dec 2 04:39:10 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:39:11 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Dec 2 04:39:11 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. 
Dec 2 04:39:11 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Dec 2 04:39:12 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:39:12 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Dec 2 04:39:12 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Dec 2 04:39:12 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:39:12 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:39:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65227 DF PROTO=TCP SPT=53506 DPT=9105 SEQ=988059590 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B79220000000001030307) Dec 2 04:39:13 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:39:13 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:39:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Dec 2 04:39:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:39:14 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Dec 2 04:39:14 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:39:14 localhost systemd[1]: var-lib-containers-storage-overlay-d40ebd622fb49c1d984ae69be39f1f1d5d9bbd0185c9e75888b797dd6f2afb7e-merged.mount: Deactivated successfully. Dec 2 04:39:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:39:14 localhost systemd[1]: var-lib-containers-storage-overlay-d40ebd622fb49c1d984ae69be39f1f1d5d9bbd0185c9e75888b797dd6f2afb7e-merged.mount: Deactivated successfully. Dec 2 04:39:14 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:39:15 localhost systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully. Dec 2 04:39:15 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:39:15 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully. Dec 2 04:39:15 localhost systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully. 
Dec 2 04:39:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14153 DF PROTO=TCP SPT=41232 DPT=9102 SEQ=2255338904 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B846E0000000001030307) Dec 2 04:39:16 localhost systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully. Dec 2 04:39:16 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully. Dec 2 04:39:16 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Dec 2 04:39:17 localhost systemd[1]: var-lib-containers-storage-overlay-73f9890a30d4cca7075aebf2d1c79838b39a1c605ffe5291a19916efb9ec9b29-merged.mount: Deactivated successfully. Dec 2 04:39:17 localhost systemd[1]: var-lib-containers-storage-overlay-73f9890a30d4cca7075aebf2d1c79838b39a1c605ffe5291a19916efb9ec9b29-merged.mount: Deactivated successfully. Dec 2 04:39:17 localhost systemd[1]: session-55.scope: Deactivated successfully. Dec 2 04:39:17 localhost systemd[1]: session-55.scope: Consumed 59.546s CPU time. Dec 2 04:39:17 localhost systemd-logind[760]: Session 55 logged out. Waiting for processes to exit. Dec 2 04:39:17 localhost systemd-logind[760]: Removed session 55. Dec 2 04:39:18 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:39:18 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. 
Dec 2 04:39:18 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:39:18 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:39:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14155 DF PROTO=TCP SPT=41232 DPT=9102 SEQ=2255338904 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B90620000000001030307) Dec 2 04:39:19 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Dec 2 04:39:19 localhost systemd[1]: var-lib-containers-storage-overlay-1f51912cd7ca4d93a076413ed4727a62a427f09f722d7bf72e350182571c8db0-merged.mount: Deactivated successfully. Dec 2 04:39:19 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Dec 2 04:39:19 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:39:20 localhost systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully. Dec 2 04:39:20 localhost systemd[1]: var-lib-containers-storage-overlay-e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3-merged.mount: Deactivated successfully. Dec 2 04:39:20 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully. 
Dec 2 04:39:21 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:39:21 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Dec 2 04:39:21 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Dec 2 04:39:21 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:39:21 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:39:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16476 DF PROTO=TCP SPT=58868 DPT=9101 SEQ=1055772396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B9B220000000001030307) Dec 2 04:39:22 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:39:22 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:39:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:39:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. 
Dec 2 04:39:22 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:39:23 localhost podman[242479]: 2025-12-02 09:39:23.050655763 +0000 UTC m=+0.106633003 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 04:39:23 localhost podman[242479]: 2025-12-02 09:39:23.079964546 +0000 UTC m=+0.135941806 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:39:23 localhost podman[242479]: unhealthy Dec 2 04:39:23 localhost 
podman[242480]: 2025-12-02 09:39:23.09550136 +0000 UTC m=+0.148241212 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller) Dec 2 04:39:23 localhost podman[242480]: 2025-12-02 09:39:23.200126631 +0000 UTC m=+0.252866493 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 04:39:23 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:39:23 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:39:23 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:39:23 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:39:23 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'. Dec 2 04:39:23 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:39:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:39:24 localhost podman[242523]: 2025-12-02 09:39:24.07722446 +0000 UTC m=+0.081566139 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 04:39:24 localhost podman[242523]: 2025-12-02 09:39:24.088775622 +0000 UTC m=+0.093117301 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck 
podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:39:24 localhost podman[242523]: unhealthy Dec 2 04:39:24 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Dec 2 04:39:24 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:39:24 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:39:24 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:39:25 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully. Dec 2 04:39:25 localhost systemd[1]: var-lib-containers-storage-overlay-a962ed19f38fa02a2bde769e5b1e4ad9f81e2456610cd4047cfb92b422afb6bb-merged.mount: Deactivated successfully. Dec 2 04:39:25 localhost systemd[1]: var-lib-containers-storage-overlay-a962ed19f38fa02a2bde769e5b1e4ad9f81e2456610cd4047cfb92b422afb6bb-merged.mount: Deactivated successfully. Dec 2 04:39:25 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:39:25 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'. 
Dec 2 04:39:26 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:39:26 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:39:27 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:39:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52420 DF PROTO=TCP SPT=46650 DPT=9101 SEQ=1238365235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BAFE20000000001030307) Dec 2 04:39:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. 
Dec 2 04:39:27 localhost podman[242546]: 2025-12-02 09:39:27.581504031 +0000 UTC m=+0.087109617 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:39:27 localhost podman[242546]: 2025-12-02 09:39:27.619601653 +0000 UTC 
m=+0.125207309 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 2 04:39:27 localhost systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully. 
Dec 2 04:39:28 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 2 04:39:28 localhost systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully. Dec 2 04:39:28 localhost systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully. Dec 2 04:39:29 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Dec 2 04:39:29 localhost systemd[1]: var-lib-containers-storage-overlay-73f9890a30d4cca7075aebf2d1c79838b39a1c605ffe5291a19916efb9ec9b29-merged.mount: Deactivated successfully. Dec 2 04:39:29 localhost systemd[1]: var-lib-containers-storage-overlay-73f9890a30d4cca7075aebf2d1c79838b39a1c605ffe5291a19916efb9ec9b29-merged.mount: Deactivated successfully. Dec 2 04:39:29 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:39:30 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 2 04:39:30 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 2 04:39:30 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 2 04:39:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 04:39:30 localhost podman[242564]: 2025-12-02 09:39:30.246547018 +0000 UTC m=+0.092232763 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41)
Dec 2 04:39:30 localhost podman[242564]: 2025-12-02 09:39:30.260803533 +0000 UTC m=+0.106489308 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal)
Dec 2 04:39:31 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 2 04:39:31 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 2 04:39:31 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 2 04:39:31 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 2 04:39:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14157 DF PROTO=TCP SPT=41232 DPT=9102 SEQ=2255338904 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BC1230000000001030307)
Dec 2 04:39:31 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 2 04:39:31 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 04:39:33 localhost systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 2 04:39:33 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:39:33 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 2 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 04:39:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56315 DF PROTO=TCP SPT=42962 DPT=9882 SEQ=1853791139 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BC9410000000001030307)
Dec 2 04:39:33 localhost podman[242584]: 2025-12-02 09:39:33.638857685 +0000 UTC m=+0.071090049 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 2 04:39:33 localhost podman[242584]: 2025-12-02 09:39:33.649043906 +0000 UTC m=+0.081276270 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 2 04:39:34 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 2 04:39:34 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 2 04:39:34 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:39:34 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:39:34 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:39:34 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 04:39:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56316 DF PROTO=TCP SPT=42962 DPT=9882 SEQ=1853791139 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BCD630000000001030307)
Dec 2 04:39:35 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:39:35 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:39:35 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:39:35 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:35 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:35 localhost podman[239757]: time="2025-12-02T09:39:35Z" level=error msg="Getting root fs size for \"a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a\": unmounting layer c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6: replacing mount point \"/var/lib/containers/storage/overlay/c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6/merged\": device or resource busy"
Dec 2 04:39:35 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:35 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:36 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 2 04:39:36 localhost systemd[1]: var-lib-containers-storage-overlay-6ac3d5ef6cd74f750bad6e1bed4e64701dec5212d5cf52ac16ce138246b77afa-merged.mount: Deactivated successfully.
Dec 2 04:39:36 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:39:36 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:36 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:36 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56317 DF PROTO=TCP SPT=42962 DPT=9882 SEQ=1853791139 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BD5620000000001030307)
Dec 2 04:39:37 localhost systemd[1]: var-lib-containers-storage-overlay-6ac3d5ef6cd74f750bad6e1bed4e64701dec5212d5cf52ac16ce138246b77afa-merged.mount: Deactivated successfully.
Dec 2 04:39:37 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:39:37 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 2 04:39:37 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 2 04:39:38 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 2 04:39:38 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 2 04:39:38 localhost systemd[1]: var-lib-containers-storage-overlay-a962ed19f38fa02a2bde769e5b1e4ad9f81e2456610cd4047cfb92b422afb6bb-merged.mount: Deactivated successfully.
Dec 2 04:39:38 localhost systemd[1]: var-lib-containers-storage-overlay-a962ed19f38fa02a2bde769e5b1e4ad9f81e2456610cd4047cfb92b422afb6bb-merged.mount: Deactivated successfully.
Dec 2 04:39:39 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:39:39 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 2 04:39:39 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 2 04:39:40 localhost systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 2 04:39:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64218 DF PROTO=TCP SPT=34694 DPT=9100 SEQ=10372801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BE3220000000001030307)
Dec 2 04:39:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:39:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 04:39:40 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:39:40 localhost podman[242692]: 2025-12-02 09:39:40.513902267 +0000 UTC m=+0.087019156 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 2 04:39:40 localhost podman[242692]: 2025-12-02 09:39:40.526965475 +0000 UTC m=+0.100082374 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 2 04:39:41 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:39:41 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 2 04:39:41 localhost systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 2 04:39:41 localhost systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 2 04:39:41 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 04:39:42 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 2 04:39:42 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:39:42 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 2 04:39:42 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:39:43 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 2 04:39:43 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 2 04:39:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55788 DF PROTO=TCP SPT=37382 DPT=9105 SEQ=3322693370 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BEE630000000001030307)
Dec 2 04:39:43 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 2 04:39:43 localhost systemd[1]: var-lib-containers-storage-overlay-d06b9618ea7afeaba672d022a7f469c1b4fb954818b2395f63391bb50912ecbb-merged.mount: Deactivated successfully.
Dec 2 04:39:44 localhost systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 2 04:39:44 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:39:44 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 2 04:39:45 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 2 04:39:45 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:39:45 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:39:45 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:39:45 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:45 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:45 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64111 DF PROTO=TCP SPT=54532 DPT=9105 SEQ=2084957773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BF9220000000001030307)
Dec 2 04:39:46 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 2 04:39:46 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 2 04:39:46 localhost systemd[1]: var-lib-containers-storage-overlay-6ac3d5ef6cd74f750bad6e1bed4e64701dec5212d5cf52ac16ce138246b77afa-merged.mount: Deactivated successfully.
Dec 2 04:39:46 localhost systemd[1]: var-lib-containers-storage-overlay-6ac3d5ef6cd74f750bad6e1bed4e64701dec5212d5cf52ac16ce138246b77afa-merged.mount: Deactivated successfully.
Dec 2 04:39:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:47 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:39:48 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 2 04:39:48 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 2 04:39:48 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:39:48 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:39:48 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:48 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 2 04:39:48 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:48 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:48 localhost podman[239757]: time="2025-12-02T09:39:48Z" level=error msg="Getting root fs size for \"acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7\": getting diffsize of layer \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\" and its parent \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\": unmounting layer 3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae: replacing mount point \"/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/merged\": device or resource busy"
Dec 2 04:39:48 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:48 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56319 DF PROTO=TCP SPT=42962 DPT=9882 SEQ=1853791139 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C05220000000001030307)
Dec 2 04:39:49 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:39:49 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 2 04:39:49 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 2 04:39:49 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:50 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 2 04:39:50 localhost systemd[1]: var-lib-containers-storage-overlay-3d2cbcd6205ebc71bef7b0378e46c50958788e3d833a076a9d36ebe402a8a467-merged.mount: Deactivated successfully.
Dec 2 04:39:50 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 2 04:39:50 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 2 04:39:51 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:39:51 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:39:51 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:39:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52422 DF PROTO=TCP SPT=46650 DPT=9101 SEQ=1238365235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C11230000000001030307)
Dec 2 04:39:52 localhost systemd[1]: var-lib-containers-storage-overlay-5d735ed10a550a807437a0617701eca41c00b16c522094f4bdfdfee4840a918b-merged.mount: Deactivated successfully.
Dec 2 04:39:52 localhost sshd[242711]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:39:52 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:39:52 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 2 04:39:52 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 2 04:39:53 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 2 04:39:53 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:39:53 localhost systemd[1]: var-lib-containers-storage-overlay-d06b9618ea7afeaba672d022a7f469c1b4fb954818b2395f63391bb50912ecbb-merged.mount: Deactivated successfully.
Dec 2 04:39:53 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:39:53 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:53 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:53 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:39:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:39:54 localhost podman[242713]: 2025-12-02 09:39:54.153255191 +0000 UTC m=+0.156985288 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:39:54 localhost podman[242713]: 2025-12-02 09:39:54.202562846 +0000 UTC m=+0.206292933 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 04:39:54 localhost podman[242713]: unhealthy Dec 2 04:39:54 localhost systemd[1]: 
var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Dec 2 04:39:54 localhost systemd[1]: var-lib-containers-storage-overlay-4cbd426914bbc0b3c94f281248297da1bdd998807cad604e4ab2f39851a1899c-merged.mount: Deactivated successfully. Dec 2 04:39:55 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Dec 2 04:39:55 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. Dec 2 04:39:55 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. Dec 2 04:39:55 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:39:55 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'. Dec 2 04:39:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. 
Dec 2 04:39:56 localhost podman[242740]: 2025-12-02 09:39:56.074611458 +0000 UTC m=+0.083170038 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:39:56 localhost podman[242740]: 2025-12-02 09:39:56.120973152 +0000 UTC m=+0.129531742 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:39:56 localhost podman[242740]: unhealthy Dec 2 04:39:56 localhost systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully. Dec 2 04:39:56 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 2 04:39:56 localhost systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully. Dec 2 04:39:56 localhost systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully. Dec 2 04:39:56 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:39:56 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'. Dec 2 04:39:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37697 DF PROTO=TCP SPT=59168 DPT=9101 SEQ=347695817 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C25220000000001030307) Dec 2 04:39:57 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Dec 2 04:39:57 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. 
Dec 2 04:39:57 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 2 04:39:57 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 2 04:39:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55791 DF PROTO=TCP SPT=37382 DPT=9105 SEQ=3322693370 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C27230000000001030307)
Dec 2 04:39:58 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:58 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:39:58 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:39:58 localhost podman[242714]: 2025-12-02 09:39:58.175077036 +0000 UTC m=+4.173501311 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller) Dec 2 04:39:58 localhost podman[242714]: 2025-12-02 09:39:58.250776865 +0000 UTC m=+4.249201110 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:39:58 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:39:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:40:00 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:40:00 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:40:00 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:40:00 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:40:00 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:00 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:00 localhost podman[242778]: 2025-12-02 09:40:00.70095297 +0000 UTC m=+0.710174260 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 04:40:00 localhost podman[242778]: 2025-12-02 09:40:00.73475544 +0000 UTC m=+0.743976660 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true) Dec 2 04:40:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14171 DF PROTO=TCP SPT=32858 DPT=9102 SEQ=3707033213 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C35220000000001030307) Dec 2 04:40:01 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:40:01 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:40:01 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:40:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:40:02 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:40:02 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:40:02 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:40:02 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:40:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Dec 2 04:40:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:02 localhost podman[242796]: 2025-12-02 09:40:02.604819512 +0000 UTC m=+0.615457540 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, version=9.6, container_name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible) Dec 2 04:40:02 localhost podman[242796]: 2025-12-02 09:40:02.617827149 +0000 UTC m=+0.628465147 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_id=edpm, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 2 04:40:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:40:03.147 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:40:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:40:03.148 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:40:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:40:03.148 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:40:03 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:40:03 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:40:03 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Dec 2 04:40:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36136 DF PROTO=TCP SPT=41450 DPT=9882 SEQ=1489999072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C3E710000000001030307)
Dec 2 04:40:03 localhost nova_compute[229585]: 2025-12-02 09:40:03.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:40:03 localhost nova_compute[229585]: 2025-12-02 09:40:03.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:40:03 localhost nova_compute[229585]: 2025-12-02 09:40:03.641 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 2 04:40:03 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:40:03 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 04:40:03 localhost nova_compute[229585]: 2025-12-02 09:40:03.763 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 2 04:40:03 localhost nova_compute[229585]: 2025-12-02 09:40:03.766 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:40:03 localhost nova_compute[229585]: 2025-12-02 09:40:03.766 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 2 04:40:03 localhost nova_compute[229585]: 2025-12-02 09:40:03.846 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:40:04 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:40:04 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:40:04 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:40:04 localhost nova_compute[229585]: 2025-12-02 09:40:04.859 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:40:04 localhost nova_compute[229585]: 2025-12-02 09:40:04.860 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:40:04 localhost nova_compute[229585]: 2025-12-02 09:40:04.860 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 2 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 04:40:05 localhost podman[242815]: 2025-12-02 09:40:05.102508496 +0000 UTC m=+0.106005454 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 2 04:40:05 localhost podman[242815]: 2025-12-02 09:40:05.141952129 +0000 UTC m=+0.145449047 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 04:40:05 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 2 04:40:05 localhost systemd[1]: var-lib-containers-storage-overlay-3d2cbcd6205ebc71bef7b0378e46c50958788e3d833a076a9d36ebe402a8a467-merged.mount: Deactivated successfully.
Dec 2 04:40:06 localhost nova_compute[229585]: 2025-12-02 09:40:06.636 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:40:06 localhost nova_compute[229585]: 2025-12-02 09:40:06.639 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:40:06 localhost nova_compute[229585]: 2025-12-02 09:40:06.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:40:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36138 DF PROTO=TCP SPT=41450 DPT=9882 SEQ=1489999072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C4A630000000001030307)
Dec 2 04:40:07 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:40:07 localhost systemd[1]: var-lib-containers-storage-overlay-e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a-merged.mount: Deactivated successfully.
Dec 2 04:40:07 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 04:40:07 localhost nova_compute[229585]: 2025-12-02 09:40:07.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:40:07 localhost nova_compute[229585]: 2025-12-02 09:40:07.641 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 2 04:40:07 localhost nova_compute[229585]: 2025-12-02 09:40:07.641 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 2 04:40:07 localhost nova_compute[229585]: 2025-12-02 09:40:07.655 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 2 04:40:07 localhost nova_compute[229585]: 2025-12-02 09:40:07.655 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:40:08 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:40:08 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 2 04:40:08 localhost nova_compute[229585]: 2025-12-02 09:40:08.639 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:40:08 localhost nova_compute[229585]: 2025-12-02 09:40:08.663 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 04:40:08 localhost nova_compute[229585]: 2025-12-02 09:40:08.663 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 04:40:08 localhost nova_compute[229585]: 2025-12-02 09:40:08.663 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 04:40:08 localhost nova_compute[229585]: 2025-12-02 09:40:08.663 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 2 04:40:08 localhost nova_compute[229585]: 2025-12-02 09:40:08.663 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 04:40:09 localhost nova_compute[229585]: 2025-12-02 09:40:09.070 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 04:40:09 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:40:09 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:40:09 localhost nova_compute[229585]: 2025-12-02 09:40:09.225 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 2 04:40:09 localhost nova_compute[229585]: 2025-12-02 09:40:09.226 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=13066MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 2 04:40:09 localhost nova_compute[229585]: 2025-12-02 09:40:09.226 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 04:40:09 localhost nova_compute[229585]: 2025-12-02 09:40:09.226 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 04:40:09 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:40:09 localhost nova_compute[229585]: 2025-12-02 09:40:09.327 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 2 04:40:09 localhost nova_compute[229585]: 2025-12-02 09:40:09.327 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 2 04:40:09 localhost nova_compute[229585]: 2025-12-02 09:40:09.392 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 2 04:40:09 localhost nova_compute[229585]: 2025-12-02 09:40:09.460 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 2 04:40:09 localhost nova_compute[229585]: 2025-12-02 09:40:09.460 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 2 04:40:09 localhost nova_compute[229585]: 2025-12-02 09:40:09.477 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 2 04:40:09 localhost nova_compute[229585]: 2025-12-02 09:40:09.501 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_RESCUE_BFV,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_MMX,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 2 04:40:09 localhost nova_compute[229585]: 2025-12-02 09:40:09.526 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 04:40:10 localhost nova_compute[229585]: 2025-12-02 09:40:10.000 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 04:40:10 localhost nova_compute[229585]: 2025-12-02 09:40:10.006 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 2 04:40:10 localhost nova_compute[229585]: 2025-12-02 09:40:10.033 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 2 04:40:10 localhost nova_compute[229585]: 2025-12-02 09:40:10.037 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 2 04:40:10 localhost nova_compute[229585]: 2025-12-02 09:40:10.037 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 04:40:10 localhost systemd[1]: var-lib-containers-storage-overlay-5d735ed10a550a807437a0617701eca41c00b16c522094f4bdfdfee4840a918b-merged.mount: Deactivated successfully.
Dec 2 04:40:10 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:40:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21026 DF PROTO=TCP SPT=43298 DPT=9100 SEQ=1065710807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C59220000000001030307)
Dec 2 04:40:10 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:40:10 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 2 04:40:11 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:40:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 04:40:11 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
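[Editor's note] The placement inventory logged above pairs each resource class with a total, a reserved amount, and an allocation ratio. A minimal sketch of the capacity arithmetic implied by those numbers, using the standard `(total - reserved) * allocation_ratio` rule; treat the function name as illustrative:

```python
# Inventory values copied from the log's _refresh_and_get_inventory entry
# (min_unit/max_unit/step_size omitted for brevity).
INVENTORY = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
}

def schedulable(inventory: dict) -> dict:
    """Effective schedulable capacity per resource class:
    (total - reserved) * allocation_ratio."""
    return {
        rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
        for rc, v in inventory.items()
    }

print(schedulable(INVENTORY))
# VCPU is oversubscribed 16x: 8 physical cores present 128 schedulable vCPUs.
```

This explains why an 8-core host with 15.3 GiB usable RAM can still accept many small instances: only memory and disk are counted 1:1.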
Dec 2 04:40:11 localhost podman[242882]: 2025-12-02 09:40:11.515897111 +0000 UTC m=+0.079480788 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 2 04:40:11 localhost podman[242882]: 2025-12-02 09:40:11.531023538 +0000 UTC m=+0.094607285 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:40:11 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:40:12 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:40:12 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:40:12 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:40:12 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 04:40:12 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 2 04:40:13 localhost systemd[1]: var-lib-containers-storage-overlay-4cbd426914bbc0b3c94f281248297da1bdd998807cad604e4ab2f39851a1899c-merged.mount: Deactivated successfully.
Dec 2 04:40:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39908 DF PROTO=TCP SPT=44272 DPT=9105 SEQ=2658120026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C63A30000000001030307)
Dec 2 04:40:13 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:40:13 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:40:14 localhost systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully.
Dec 2 04:40:14 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:40:14 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 2 04:40:14 localhost systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 2 04:40:15 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:40:15 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:40:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23317 DF PROTO=TCP SPT=38768 DPT=9102 SEQ=905833791 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C6ECF0000000001030307)
Dec 2 04:40:16 localhost systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully.
Dec 2 04:40:16 localhost systemd[1]: var-lib-containers-storage-overlay-307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6-merged.mount: Deactivated successfully.
Dec 2 04:40:17 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 2 04:40:17 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 2 04:40:18 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 2 04:40:18 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:40:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23319 DF PROTO=TCP SPT=38768 DPT=9102 SEQ=905833791 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C7AE20000000001030307)
Dec 2 04:40:19 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:40:19 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:40:19 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:40:19 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:40:20 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:40:20 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 2 04:40:20 localhost systemd[1]: var-lib-containers-storage-overlay-e33240bb039d460ca33f381563cd1cbbc8c9cff68602bf3e8b26baddcb70d04b-merged.mount: Deactivated successfully.
Dec 2 04:40:20 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:40:21 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:40:21 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:40:21 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 2 04:40:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37699 DF PROTO=TCP SPT=59168 DPT=9101 SEQ=347695817 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C85230000000001030307)
Dec 2 04:40:21 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:40:21 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 2 04:40:21 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:40:22 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:40:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:40:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:40:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:40:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 2 04:40:22 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:40:22 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:40:22 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:40:23 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:40:23 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:40:24 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:40:24 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:40:25 localhost systemd[1]: var-lib-containers-storage-overlay-e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a-merged.mount: Deactivated successfully.
Dec 2 04:40:25 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 2 04:40:25 localhost systemd[1]: var-lib-containers-storage-overlay-e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a-merged.mount: Deactivated successfully.
Dec 2 04:40:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:40:25 localhost podman[242898]: 2025-12-02 09:40:25.558695729 +0000 UTC m=+0.067605378 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:40:25 localhost podman[242898]: 2025-12-02 09:40:25.562293225 +0000 UTC m=+0.071202914 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 2 04:40:25 localhost podman[242898]: unhealthy
Dec 2 04:40:25 localhost systemd[1]: var-lib-containers-storage-overlay-1cd4674896f37ed03c180aa0ab9f93ced388cfe5185ce6c19dc1fe143ce7985a-merged.mount: Deactivated successfully.
Dec 2 04:40:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:40:27 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 2 04:40:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65233 DF PROTO=TCP SPT=45234 DPT=9101 SEQ=2403759065 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C9A620000000001030307)
Dec 2 04:40:27 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 2 04:40:27 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 2 04:40:27 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:40:27 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 04:40:27 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
Dec 2 04:40:27 localhost podman[242915]: 2025-12-02 09:40:27.874938278 +0000 UTC m=+0.874984650 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 2 04:40:27 localhost podman[242915]: 2025-12-02 09:40:27.883863481 +0000 UTC m=+0.883909853 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 2 04:40:27 localhost podman[242915]: unhealthy
Dec 2 04:40:28 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:40:29 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 2 04:40:29 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 2 04:40:29 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:40:29 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:40:29 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 04:40:29 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
Dec 2 04:40:30 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 2 04:40:30 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 2 04:40:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:40:30 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 2 04:40:30 localhost podman[242937]: 2025-12-02 09:40:30.805653013 +0000 UTC m=+0.084122446 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 2 04:40:30 localhost podman[242937]: 2025-12-02 09:40:30.86586764 +0000 UTC m=+0.144337103 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 2 04:40:31 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:40:31 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 2 04:40:31 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:40:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23321 DF PROTO=TCP SPT=38768 DPT=9102 SEQ=905833791 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CAB230000000001030307)
Dec 2 04:40:31 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 2 04:40:31 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 2 04:40:31 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 2 04:40:32 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:40:32 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:40:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:40:33 localhost podman[242961]: 2025-12-02 09:40:33.088574008 +0000 UTC m=+0.089408542 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 2 04:40:33 localhost podman[242961]: 2025-12-02 09:40:33.127076165 +0000 UTC m=+0.127910679 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 2 04:40:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29337 DF PROTO=TCP SPT=45408 DPT=9882 SEQ=3445642487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CB3A10000000001030307)
Dec 2 04:40:33 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 2 04:40:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:40:34 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:40:34 localhost systemd[1]: var-lib-containers-storage-overlay-307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6-merged.mount: Deactivated successfully.
Dec 2 04:40:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29338 DF PROTO=TCP SPT=45408 DPT=9882 SEQ=3445642487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CB7A20000000001030307)
Dec 2 04:40:34 localhost systemd[1]: var-lib-containers-storage-overlay-307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6-merged.mount: Deactivated successfully.
Dec 2 04:40:34 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:40:34 localhost podman[242979]: 2025-12-02 09:40:34.73509778 +0000 UTC m=+0.877917076 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, architecture=x86_64, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41)
Dec 2 04:40:34 localhost podman[242979]: 2025-12-02 09:40:34.779058168 +0000 UTC m=+0.921877494 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec 2 04:40:35 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 2 04:40:35 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 2 04:40:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29339 DF PROTO=TCP SPT=45408 DPT=9882 SEQ=3445642487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CBFA30000000001030307) Dec 2 04:40:36 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:40:37 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Dec 2 04:40:37 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Dec 2 04:40:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:40:37 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 04:40:37 localhost podman[242999]: 2025-12-02 09:40:37.394814812 +0000 UTC m=+0.067425982 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:40:37 localhost podman[242999]: 2025-12-02 09:40:37.430740103 +0000 UTC m=+0.103351293 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:40:38 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:40:38 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Dec 2 04:40:38 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Dec 2 04:40:39 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:40:39 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Dec 2 04:40:39 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:40:39 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 04:40:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57767 DF PROTO=TCP SPT=58558 DPT=9100 SEQ=3246799236 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CCD220000000001030307) Dec 2 04:40:40 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:40:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:40:40 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:40:40 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Dec 2 04:40:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:41 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:40:41 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:40:41 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:40:41 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:40:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:42 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Dec 2 04:40:42 localhost systemd[1]: var-lib-containers-storage-overlay-e33240bb039d460ca33f381563cd1cbbc8c9cff68602bf3e8b26baddcb70d04b-merged.mount: Deactivated successfully. Dec 2 04:40:42 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Dec 2 04:40:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:40:42 localhost podman[243149]: 2025-12-02 09:40:42.301661673 +0000 UTC m=+0.096654436 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Dec 2 04:40:42 
localhost podman[243149]: 2025-12-02 09:40:42.3127425 +0000 UTC m=+0.107735263 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 04:40:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21027 
DF PROTO=TCP SPT=43298 DPT=9100 SEQ=1065710807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CD7230000000001030307) Dec 2 04:40:43 localhost systemd[1]: tmp-crun.CAHgbC.mount: Deactivated successfully. Dec 2 04:40:43 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:40:43 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:40:43 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Dec 2 04:40:43 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Dec 2 04:40:44 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Dec 2 04:40:44 localhost systemd[1]: var-lib-containers-storage-overlay-becbc927e1a2defd8b98f9313e9ae54e436a645a48c9af865764923e7f3644aa-merged.mount: Deactivated successfully. Dec 2 04:40:44 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:40:44 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:45 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:40:45 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Dec 2 04:40:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14296 DF PROTO=TCP SPT=33926 DPT=9102 SEQ=3238031798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CE3FE0000000001030307) Dec 2 04:40:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:46 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Dec 2 04:40:46 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:40:46 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:40:47 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:47 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:47 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:40:47 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Dec 2 04:40:47 localhost sshd[243185]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:40:47 localhost systemd-logind[760]: New session 56 of user zuul. Dec 2 04:40:47 localhost systemd[1]: Started Session 56 of User zuul. Dec 2 04:40:47 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:40:47 localhost python3.9[243281]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman Dec 2 04:40:47 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:40:47 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:40:48 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:40:48 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Dec 2 04:40:48 localhost systemd[1]: var-lib-containers-storage-overlay-1cd4674896f37ed03c180aa0ab9f93ced388cfe5185ce6c19dc1fe143ce7985a-merged.mount: Deactivated successfully. Dec 2 04:40:48 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:40:48 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Dec 2 04:40:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29341 DF PROTO=TCP SPT=45408 DPT=9882 SEQ=3445642487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CEF220000000001030307) Dec 2 04:40:50 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. Dec 2 04:40:50 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully. Dec 2 04:40:50 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully. Dec 2 04:40:51 localhost python3.9[243403]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Dec 2 04:40:51 localhost systemd[1]: Started libpod-conmon-c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.scope. 
Dec 2 04:40:51 localhost podman[243404]: 2025-12-02 09:40:51.388021624 +0000 UTC m=+0.118025476 container exec c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:40:51 localhost podman[243404]: 2025-12-02 09:40:51.421817572 +0000 UTC m=+0.151821344 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 04:40:51 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Dec 2 04:40:51 localhost systemd[1]: var-lib-containers-storage-overlay-63f5c4d65539870ee2bafb1f7e39854f191dd3f1ae459b319446f5932294db9e-merged.mount: Deactivated successfully. Dec 2 04:40:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65235 DF PROTO=TCP SPT=45234 DPT=9101 SEQ=2403759065 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CFB220000000001030307) Dec 2 04:40:52 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 2 04:40:52 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. 
Dec 2 04:40:53 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 2 04:40:53 localhost python3.9[243543]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Dec 2 04:40:53 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 2 04:40:54 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:40:54 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:40:54 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:40:54 localhost systemd[1]: libpod-conmon-c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.scope: Deactivated successfully. Dec 2 04:40:54 localhost systemd[1]: Started libpod-conmon-c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.scope. 
Dec 2 04:40:55 localhost podman[243544]: 2025-12-02 09:40:55.008340183 +0000 UTC m=+1.933017084 container exec c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 04:40:55 localhost podman[243544]: 2025-12-02 09:40:55.041168263 +0000 UTC m=+1.965845194 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller) Dec 2 04:40:55 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. Dec 2 04:40:55 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 2 04:40:56 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:40:56 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:40:56 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Dec 2 04:40:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9686 DF PROTO=TCP SPT=54012 DPT=9101 SEQ=1647305550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D0FA20000000001030307) Dec 2 04:40:57 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. Dec 2 04:40:57 localhost sshd[243645]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:40:57 localhost python3.9[243685]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:40:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:40:57 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:40:58 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:40:58 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:40:58 localhost systemd[1]: libpod-conmon-c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.scope: Deactivated successfully. 
Dec 2 04:40:58 localhost podman[243755]: 2025-12-02 09:40:58.190215775 +0000 UTC m=+0.260544505 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Dec 2 04:40:58 localhost podman[243755]: 2025-12-02 09:40:58.218891751 +0000 UTC m=+0.289220551 container exec_died 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:40:58 localhost podman[243755]: unhealthy Dec 2 04:40:58 localhost python3.9[243806]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman Dec 2 04:40:58 localhost systemd[1]: 
var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:40:58 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:40:58 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:40:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:41:00 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully. Dec 2 04:41:00 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:41:00 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'. Dec 2 04:41:00 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:00 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Dec 2 04:41:00 localhost podman[243823]: 2025-12-02 09:41:00.157481809 +0000 UTC m=+0.161217022 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:41:00 localhost podman[243823]: 2025-12-02 09:41:00.166836105 +0000 UTC m=+0.170571318 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:41:00 localhost podman[243823]: unhealthy Dec 2 04:41:01 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:41:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14300 DF PROTO=TCP SPT=33926 DPT=9102 SEQ=3238031798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D21220000000001030307) Dec 2 04:41:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:41:02 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:41:02 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Dec 2 04:41:02 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Dec 2 04:41:02 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:41:02 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'. Dec 2 04:41:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Dec 2 04:41:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:02 localhost podman[243844]: 2025-12-02 09:41:02.600234083 +0000 UTC m=+0.582094039 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3) Dec 2 04:41:02 localhost podman[243844]: 2025-12-02 09:41:02.639031229 +0000 UTC m=+0.620891145 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 04:41:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:41:03.148 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:41:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:41:03.149 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:41:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:41:03.149 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:41:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41494 DF PROTO=TCP SPT=53878 DPT=9882 SEQ=1550781520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D28D10000000001030307) Dec 2 04:41:04 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:41:04 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:41:04 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:41:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41495 DF PROTO=TCP SPT=53878 DPT=9882 SEQ=1550781520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D2CE20000000001030307) Dec 2 04:41:04 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:41:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:41:05 localhost systemd[1]: var-lib-containers-storage-overlay-e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084-merged.mount: Deactivated successfully. 
Dec 2 04:41:05 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 04:41:05 localhost podman[243869]: 2025-12-02 09:41:05.231792134 +0000 UTC m=+0.264071718 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent) Dec 2 04:41:05 localhost podman[243869]: 2025-12-02 09:41:05.263004176 +0000 UTC m=+0.295283720 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true) Dec 2 04:41:05 localhost python3.9[243996]: 
ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Dec 2 04:41:06 localhost nova_compute[229585]: 2025-12-02 09:41:06.039 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:41:06 localhost nova_compute[229585]: 2025-12-02 09:41:06.039 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:41:06 localhost nova_compute[229585]: 2025-12-02 09:41:06.039 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:41:06 localhost nova_compute[229585]: 2025-12-02 09:41:06.040 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:41:06 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:41:06 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Dec 2 04:41:06 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:41:06 localhost nova_compute[229585]: 2025-12-02 09:41:06.636 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:41:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41496 DF PROTO=TCP SPT=53878 DPT=9882 SEQ=1550781520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D34E20000000001030307) Dec 2 04:41:07 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:41:07 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:41:07 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:41:07 localhost nova_compute[229585]: 2025-12-02 09:41:07.635 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:41:07 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:41:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:07 localhost nova_compute[229585]: 2025-12-02 09:41:07.661 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:41:07 localhost nova_compute[229585]: 2025-12-02 09:41:07.662 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:41:07 localhost nova_compute[229585]: 2025-12-02 09:41:07.662 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:41:07 localhost systemd[1]: Started libpod-conmon-225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.scope. 
Dec 2 04:41:07 localhost podman[243997]: 2025-12-02 09:41:07.693710295 +0000 UTC m=+1.887157790 container exec 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent) Dec 2 04:41:07 localhost podman[243997]: 2025-12-02 09:41:07.698390162 +0000 UTC m=+1.891837647 container exec_died 
225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 04:41:07 localhost podman[244009]: 2025-12-02 09:41:07.785616378 +0000 UTC m=+0.343880535 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, release=1755695350, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-type=git) Dec 2 04:41:07 localhost podman[244009]: 2025-12-02 09:41:07.79176055 +0000 UTC m=+0.350024697 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, config_id=edpm, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:41:08 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:41:08 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:41:08 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:41:08 localhost nova_compute[229585]: 2025-12-02 09:41:08.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:41:08 localhost nova_compute[229585]: 2025-12-02 09:41:08.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:41:08 localhost nova_compute[229585]: 2025-12-02 09:41:08.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:41:08 localhost nova_compute[229585]: 2025-12-02 09:41:08.686 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:41:09 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:41:09 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:41:09 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:41:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:41:09 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:09 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:09 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 04:41:10 localhost python3.9[244168]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Dec 2 04:41:10 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Dec 2 04:41:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7488 DF PROTO=TCP SPT=55254 DPT=9100 SEQ=1578231587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D43220000000001030307) Dec 2 04:41:10 localhost nova_compute[229585]: 2025-12-02 09:41:10.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:41:10 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:41:10 localhost nova_compute[229585]: 2025-12-02 09:41:10.666 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:41:10 localhost nova_compute[229585]: 2025-12-02 09:41:10.667 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:41:10 localhost nova_compute[229585]: 2025-12-02 09:41:10.667 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:41:10 localhost 
nova_compute[229585]: 2025-12-02 09:41:10.667 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:41:10 localhost nova_compute[229585]: 2025-12-02 09:41:10.669 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:41:10 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:41:10 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:41:10 localhost systemd[1]: libpod-conmon-225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.scope: Deactivated successfully. 
Dec 2 04:41:10 localhost podman[244048]: 2025-12-02 09:41:10.905356615 +0000 UTC m=+1.199968996 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:41:10 localhost systemd[1]: Started libpod-conmon-225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.scope. 
Dec 2 04:41:10 localhost podman[244169]: 2025-12-02 09:41:10.992151078 +0000 UTC m=+0.674273112 container exec 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 04:41:11 localhost podman[244169]: 2025-12-02 09:41:11.025788742 +0000 UTC m=+0.707910716 container exec_died 
225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:41:11 localhost podman[244048]: 2025-12-02 09:41:11.046109181 +0000 UTC m=+1.340721612 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:41:11 localhost nova_compute[229585]: 2025-12-02 09:41:11.154 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:41:11 localhost nova_compute[229585]: 2025-12-02 09:41:11.347 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:41:11 localhost nova_compute[229585]: 2025-12-02 09:41:11.348 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12986MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:41:11 localhost nova_compute[229585]: 2025-12-02 09:41:11.348 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:41:11 localhost nova_compute[229585]: 2025-12-02 09:41:11.349 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:41:11 localhost nova_compute[229585]: 2025-12-02 09:41:11.413 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:41:11 localhost nova_compute[229585]: 2025-12-02 09:41:11.413 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:41:11 localhost nova_compute[229585]: 2025-12-02 09:41:11.434 229589 DEBUG 
oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:41:11 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:41:11 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:41:11 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:41:11 localhost nova_compute[229585]: 2025-12-02 09:41:11.870 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:41:11 localhost nova_compute[229585]: 2025-12-02 09:41:11.877 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:41:11 localhost nova_compute[229585]: 2025-12-02 09:41:11.913 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 
'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:41:11 localhost nova_compute[229585]: 2025-12-02 09:41:11.917 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:41:11 localhost nova_compute[229585]: 2025-12-02 09:41:11.917 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:41:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34857 DF PROTO=TCP SPT=37094 DPT=9105 SEQ=1296272358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D4DE20000000001030307) Dec 2 04:41:13 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Dec 2 04:41:13 localhost systemd[1]: var-lib-containers-storage-overlay-becbc927e1a2defd8b98f9313e9ae54e436a645a48c9af865764923e7f3644aa-merged.mount: Deactivated successfully. Dec 2 04:41:13 localhost systemd[1]: var-lib-containers-storage-overlay-becbc927e1a2defd8b98f9313e9ae54e436a645a48c9af865764923e7f3644aa-merged.mount: Deactivated successfully. Dec 2 04:41:13 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:41:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:14 localhost python3.9[244363]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:41:14 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:41:14 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:41:14 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Dec 2 04:41:14 localhost python3.9[244473]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman Dec 2 04:41:14 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Dec 2 04:41:14 localhost systemd[1]: libpod-conmon-225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.scope: Deactivated successfully. Dec 2 04:41:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Dec 2 04:41:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:41:15 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:41:15 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:41:15 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:41:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39913 DF PROTO=TCP SPT=44272 DPT=9105 SEQ=2658120026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D59220000000001030307) Dec 2 04:41:16 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:41:16 localhost systemd[1]: var-lib-containers-storage-overlay-14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca-merged.mount: Deactivated successfully. Dec 2 04:41:17 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:17 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Dec 2 04:41:17 localhost podman[244485]: 2025-12-02 09:41:17.135600786 +0000 UTC m=+2.144836579 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible) Dec 2 04:41:17 localhost podman[244485]: 2025-12-02 09:41:17.148445855 +0000 UTC m=+2.157681628 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:41:17 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Dec 2 04:41:17 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:41:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4177 DF PROTO=TCP SPT=55286 DPT=9102 SEQ=3341804910 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D65220000000001030307) Dec 2 04:41:19 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:41:19 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:41:19 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:41:19 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:41:19 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:19 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:20 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Dec 2 04:41:21 localhost python3.9[244611]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Dec 2 04:41:21 localhost systemd[1]: Started libpod-conmon-2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.scope. Dec 2 04:41:21 localhost podman[244612]: 2025-12-02 09:41:21.176175085 +0000 UTC m=+0.112948937 container exec 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd) Dec 2 04:41:21 localhost podman[244612]: 2025-12-02 09:41:21.18379436 +0000 UTC m=+0.120568242 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 
04:41:21 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Dec 2 04:41:21 localhost systemd[1]: var-lib-containers-storage-overlay-63f5c4d65539870ee2bafb1f7e39854f191dd3f1ae459b319446f5932294db9e-merged.mount: Deactivated successfully. Dec 2 04:41:21 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:41:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9688 DF PROTO=TCP SPT=54012 DPT=9101 SEQ=1647305550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D6F220000000001030307) Dec 2 04:41:21 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:41:21 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:21 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:21 localhost systemd[1]: libpod-conmon-2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.scope: Deactivated successfully. Dec 2 04:41:23 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:41:23 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Dec 2 04:41:23 localhost python3.9[244749]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Dec 2 04:41:23 localhost systemd[1]: Started libpod-conmon-2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.scope. Dec 2 04:41:23 localhost podman[244750]: 2025-12-02 09:41:23.199371431 +0000 UTC m=+0.104693343 container exec 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 04:41:23 localhost podman[244750]: 2025-12-02 09:41:23.206913303 +0000 UTC m=+0.112235205 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 
04:41:23 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:41:23 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:41:23 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:41:23 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 2 04:41:24 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:41:24 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:41:24 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:41:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:24 localhost systemd[1]: libpod-conmon-2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.scope: Deactivated successfully. Dec 2 04:41:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Dec 2 04:41:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:25 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:41:26 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:41:26 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:41:26 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:41:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47236 DF PROTO=TCP SPT=46310 DPT=9101 SEQ=1154948648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D84A30000000001030307) Dec 2 04:41:27 localhost python3.9[244889]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:41:27 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Dec 2 04:41:27 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:41:27 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:41:27 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:41:27 localhost python3.9[244999]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman Dec 2 04:41:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 2 04:41:28 localhost podman[239757]: time="2025-12-02T09:41:28Z" level=error msg="Unable to write json: \"write unix /run/podman/podman.sock->@: write: broken pipe\"" Dec 2 04:41:28 localhost podman[239757]: @ - - [02/Dec/2025:09:36:37 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 4096 "" "Go-http-client/1.1" Dec 2 04:41:28 localhost systemd[1]: var-lib-containers-storage-overlay-3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98-merged.mount: Deactivated successfully. Dec 2 04:41:28 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 2 04:41:29 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Dec 2 04:41:29 localhost python3.9[245122]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Dec 2 04:41:29 localhost systemd[1]: Started libpod-conmon-a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.scope. Dec 2 04:41:29 localhost podman[245123]: 2025-12-02 09:41:29.887650117 +0000 UTC m=+0.102315112 container exec a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 04:41:29 localhost podman[245123]: 2025-12-02 09:41:29.916419796 +0000 UTC m=+0.131084771 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 2 04:41:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:41:31 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:41:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4179 DF PROTO=TCP SPT=55286 DPT=9102 SEQ=3341804910 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D95230000000001030307) Dec 2 04:41:31 localhost systemd[1]: var-lib-containers-storage-overlay-e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084-merged.mount: Deactivated successfully. Dec 2 04:41:31 localhost systemd[1]: var-lib-containers-storage-overlay-e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084-merged.mount: Deactivated successfully. 
Dec 2 04:41:31 localhost podman[245154]: 2025-12-02 09:41:31.600002983 +0000 UTC m=+0.604868403 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible) Dec 2 04:41:31 localhost podman[245154]: 2025-12-02 09:41:31.632833653 +0000 UTC m=+0.637699103 container exec_died 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute) Dec 2 04:41:31 localhost podman[245154]: unhealthy Dec 2 04:41:32 localhost python3.9[245281]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None 
user=None workdir=None Dec 2 04:41:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:41:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48118 DF PROTO=TCP SPT=56140 DPT=9882 SEQ=983431562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D9E010000000001030307) Dec 2 04:41:33 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:41:33 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:41:34 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 2 04:41:34 localhost systemd[1]: libpod-conmon-a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.scope: Deactivated successfully. Dec 2 04:41:34 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:41:34 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'. 
Dec 2 04:41:34 localhost podman[245293]: 2025-12-02 09:41:34.116199178 +0000 UTC m=+1.120192001 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:41:34 localhost podman[245293]: 2025-12-02 09:41:34.131287043 +0000 UTC m=+1.135279826 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:41:34 localhost podman[245293]: unhealthy Dec 2 04:41:34 localhost systemd[1]: Started libpod-conmon-a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.scope. Dec 2 04:41:34 localhost podman[245282]: 2025-12-02 09:41:34.231154622 +0000 UTC m=+1.969176591 container exec a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 04:41:34 localhost podman[245282]: 2025-12-02 09:41:34.264377582 +0000 UTC m=+2.002399581 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, 
tcib_managed=true) Dec 2 04:41:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48119 DF PROTO=TCP SPT=56140 DPT=9882 SEQ=983431562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DA2220000000001030307) Dec 2 04:41:35 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 2 04:41:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:41:35 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:41:36 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 2 04:41:36 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:41:36 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'. 
Dec 2 04:41:36 localhost podman[245332]: 2025-12-02 09:41:36.121537945 +0000 UTC m=+0.396248602 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 2 04:41:36 localhost podman[245332]: 2025-12-02 09:41:36.157574869 +0000 UTC m=+0.432285906 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 2 04:41:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48120 DF PROTO=TCP SPT=56140 DPT=9882 SEQ=983431562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DAA220000000001030307)
Dec 2 04:41:36 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:41:37 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:41:37 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:41:37 localhost systemd[1]: libpod-conmon-a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.scope: Deactivated successfully.
Dec 2 04:41:37 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:41:37 localhost python3.9[245466]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:41:37 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:41:37 localhost podman[245576]: 2025-12-02 09:41:37.789145301 +0000 UTC m=+0.079763157 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:41:37 localhost podman[245576]: 2025-12-02 09:41:37.795847128 +0000 UTC m=+0.086465044 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:41:37 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:41:37 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:41:37 localhost python3.9[245577]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 2 04:41:38 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:41:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:41:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45300 DF PROTO=TCP SPT=43638 DPT=9100 SEQ=2182860206 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DB7230000000001030307)
Dec 2 04:41:40 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:41:40 localhost systemd[1]: var-lib-containers-storage-overlay-14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca-merged.mount: Deactivated successfully.
Dec 2 04:41:41 localhost systemd[1]: var-lib-containers-storage-overlay-14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca-merged.mount: Deactivated successfully.
Dec 2 04:41:41 localhost podman[245607]: 2025-12-02 09:41:41.118862067 +0000 UTC m=+1.121051035 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, version=9.6, name=ubi9-minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 2 04:41:41 localhost podman[245607]: 2025-12-02 09:41:41.153424258 +0000 UTC m=+1.155613216 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-type=git, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vendor=Red Hat, Inc.)
Dec 2 04:41:41 localhost python3.9[245735]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 2 04:41:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15392 DF PROTO=TCP SPT=48506 DPT=9105 SEQ=596598299 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DC3220000000001030307)
Dec 2 04:41:43 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:41:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 04:41:43 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:41:43 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:41:43 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 04:41:43 localhost podman[245783]: 2025-12-02 09:41:43.884867398 +0000 UTC m=+0.392793611 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 2 04:41:43 localhost podman[245783]: 2025-12-02 09:41:43.889824074 +0000 UTC m=+0.397750277 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 2 04:41:43 localhost systemd[1]: Started libpod-conmon-3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.scope.
Dec 2 04:41:43 localhost podman[245736]: 2025-12-02 09:41:43.966481958 +0000 UTC m=+2.218340469 container exec 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 2 04:41:43 localhost podman[245736]: 2025-12-02 09:41:43.999039079 +0000 UTC m=+2.250897590 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 2 04:41:45 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:41:45 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:41:45 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 2 04:41:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54946 DF PROTO=TCP SPT=50860 DPT=9102 SEQ=3557917637 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DCE5F0000000001030307)
Dec 2 04:41:46 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 04:41:46 localhost python3.9[245963]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 2 04:41:46 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:41:46 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:41:47 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 2 04:41:47 localhost systemd[1]: libpod-conmon-3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.scope: Deactivated successfully.
Dec 2 04:41:47 localhost systemd[1]: Started libpod-conmon-3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.scope.
Dec 2 04:41:47 localhost podman[245964]: 2025-12-02 09:41:47.242331554 +0000 UTC m=+0.560171883 container exec 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 2 04:41:47 localhost podman[245964]: 2025-12-02 09:41:47.27096948 +0000 UTC m=+0.588809819 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 2 04:41:47 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 2 04:41:48 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:41:48 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 2 04:41:48 localhost systemd[1]: libpod-conmon-3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.scope: Deactivated successfully.
Dec 2 04:41:48 localhost python3.9[246120]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:41:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54948 DF PROTO=TCP SPT=50860 DPT=9102 SEQ=3557917637 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DDA620000000001030307)
Dec 2 04:41:49 localhost python3.9[246230]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 2 04:41:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 04:41:50 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 2 04:41:50 localhost systemd[1]: var-lib-containers-storage-overlay-3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98-merged.mount: Deactivated successfully.
Dec 2 04:41:50 localhost systemd[1]: var-lib-containers-storage-overlay-3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98-merged.mount: Deactivated successfully.
Dec 2 04:41:50 localhost podman[239757]: @ - - [02/Dec/2025:09:36:44 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 140643 "" "Go-http-client/1.1"
Dec 2 04:41:50 localhost podman_exporter[240012]: ts=2025-12-02T09:41:50.825Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 2 04:41:50 localhost podman_exporter[240012]: ts=2025-12-02T09:41:50.826Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 2 04:41:50 localhost podman_exporter[240012]: ts=2025-12-02T09:41:50.826Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9882
Dec 2 04:41:50 localhost podman[246244]: 2025-12-02 09:41:50.858756108 +0000 UTC m=+0.858314296 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:41:50 localhost podman[246244]: 2025-12-02 09:41:50.896259426 +0000 UTC m=+0.895817604 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec 2 04:41:50 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 04:41:51 localhost python3.9[246370]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 2 04:41:51 localhost systemd[1]: Started libpod-conmon-8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.scope.
Dec 2 04:41:51 localhost podman[246371]: 2025-12-02 09:41:51.652660543 +0000 UTC m=+0.112323579 container exec 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 2 04:41:51
localhost podman[246371]: 2025-12-02 09:41:51.692603752 +0000 UTC m=+0.152266788 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:41:51 localhost systemd[1]: libpod-conmon-8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.scope: Deactivated successfully. Dec 2 04:41:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47238 DF PROTO=TCP SPT=46310 DPT=9101 SEQ=1154948648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DE5230000000001030307) Dec 2 04:41:52 localhost python3.9[246512]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Dec 2 04:41:52 localhost systemd[1]: Started libpod-conmon-8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.scope. 
Dec 2 04:41:52 localhost podman[246513]: 2025-12-02 09:41:52.511268628 +0000 UTC m=+0.084250349 container exec 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:41:52 localhost podman[246513]: 2025-12-02 09:41:52.54383678 +0000 UTC m=+0.116818431 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, 
config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:41:52 localhost systemd[1]: libpod-conmon-8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.scope: Deactivated successfully. Dec 2 04:41:53 localhost python3.9[246652]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:41:53 localhost python3.9[246762]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman Dec 2 04:41:54 localhost python3.9[246885]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Dec 2 04:41:54 localhost systemd[1]: Started libpod-conmon-bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.scope. 
Dec 2 04:41:54 localhost podman[246886]: 2025-12-02 09:41:54.615705232 +0000 UTC m=+0.100448997 container exec bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package 
manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible) Dec 2 04:41:54 localhost podman[246886]: 2025-12-02 09:41:54.645208134 +0000 UTC m=+0.129951899 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image 
that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public) Dec 2 04:41:54 localhost systemd[1]: libpod-conmon-bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.scope: Deactivated successfully. Dec 2 04:41:55 localhost python3.9[247026]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Dec 2 04:41:55 localhost systemd[1]: Started libpod-conmon-bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.scope. 
Dec 2 04:41:55 localhost podman[247027]: 2025-12-02 09:41:55.396062396 +0000 UTC m=+0.085189507 container exec bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container) Dec 2 04:41:55 localhost podman[247027]: 2025-12-02 09:41:55.424227058 +0000 UTC m=+0.113354159 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': 
'/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:41:55 localhost systemd[1]: libpod-conmon-bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.scope: Deactivated successfully. 
Dec 2 04:41:56 localhost python3.9[247167]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:41:56 localhost python3.9[247277]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:41:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51294 DF PROTO=TCP SPT=54436 DPT=9101 SEQ=3533408136 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DF9E20000000001030307) Dec 2 04:41:57 localhost python3.9[247387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:41:58 localhost python3.9[247475]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668517.2658207-3068-175235715447587/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None 
local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:41:59 localhost python3.9[247585]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:41:59 localhost python3.9[247695]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:42:00 localhost python3.9[247752]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:42:00 localhost python3.9[247862]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:42:01 localhost python3.9[247919]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.tze3l3lg recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:42:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54950 DF PROTO=TCP SPT=50860 DPT=9102 SEQ=3557917637 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54E0B230000000001030307) Dec 2 04:42:02 localhost python3.9[248029]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:42:02 localhost python3.9[248086]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:42:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:42:03.149 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:42:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:42:03.150 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:42:03 localhost ovn_metadata_agent[159477]: 2025-12-02 
09:42:03.150 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:42:03 localhost python3.9[248196]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:42:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62069 DF PROTO=TCP SPT=33636 DPT=9882 SEQ=647009093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54E13310000000001030307) Dec 2 04:42:04 localhost python3[248307]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Dec 2 04:42:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62070 DF PROTO=TCP SPT=33636 DPT=9882 SEQ=647009093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54E17220000000001030307) Dec 2 04:42:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. 
Dec 2 04:42:04 localhost podman[248418]: 2025-12-02 09:42:04.794868586 +0000 UTC m=+0.081373844 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3) Dec 2 04:42:04 localhost podman[248418]: 2025-12-02 09:42:04.813822736 +0000 UTC m=+0.100327984 container exec_died 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2) Dec 2 04:42:04 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 04:42:04 localhost python3.9[248417]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:42:05 localhost python3.9[248494]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:42:06 localhost python3.9[248604]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:42:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. 
Dec 2 04:42:06 localhost podman[248662]: 2025-12-02 09:42:06.366047973 +0000 UTC m=+0.073705658 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:42:06 localhost podman[248662]: 2025-12-02 09:42:06.374376339 +0000 UTC m=+0.082034024 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:42:06 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:42:06 localhost sshd[248685]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:42:06 localhost python3.9[248661]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:42:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62071 DF PROTO=TCP SPT=33636 DPT=9882 SEQ=647009093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54E1F220000000001030307) Dec 2 04:42:06 localhost nova_compute[229585]: 2025-12-02 09:42:06.918 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:42:06 localhost nova_compute[229585]: 2025-12-02 09:42:06.919 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:42:06 localhost nova_compute[229585]: 2025-12-02 09:42:06.919 229589 DEBUG nova.compute.manager [None 
req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:42:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:42:07 localhost podman[248796]: 2025-12-02 09:42:07.32122737 +0000 UTC m=+0.081525649 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 04:42:07 localhost podman[248796]: 2025-12-02 09:42:07.439119931 +0000 UTC m=+0.199418240 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 04:42:07 localhost python3.9[248797]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:42:07 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:42:07 localhost nova_compute[229585]: 2025-12-02 09:42:07.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:42:07 localhost python3.9[248876]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:42:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:42:08 localhost podman[248894]: 2025-12-02 09:42:08.077151812 +0000 UTC m=+0.074437109 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 04:42:08 localhost podman[248894]: 2025-12-02 09:42:08.086927521 +0000 UTC m=+0.084212818 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 04:42:08 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:42:08 localhost auditd[726]: Audit daemon rotating log files Dec 2 04:42:08 localhost python3.9[249002]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:42:08 localhost nova_compute[229585]: 2025-12-02 09:42:08.637 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:42:08 localhost nova_compute[229585]: 2025-12-02 09:42:08.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:42:08 localhost nova_compute[229585]: 2025-12-02 09:42:08.640 229589 DEBUG oslo_service.periodic_task 
[None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:42:08 localhost nova_compute[229585]: 2025-12-02 09:42:08.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:42:09 localhost python3.9[249059]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:42:09 localhost python3.9[249169]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:42:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=369 DF PROTO=TCP SPT=58560 DPT=9100 SEQ=3669563591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54E2D220000000001030307) Dec 2 04:42:10 localhost python3.9[249259]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764668529.1991947-3443-40348481829427/.source.nft follow=False _original_basename=ruleset.j2 checksum=953266ca5f7d82d2777a0a437bd7feceb9259ee8 backup=False force=True remote_src=False 
unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:42:55 localhost python3.9[254937]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:42:55 localhost rsyslogd[759]: imjournal: 342 messages lost due to rate-limiting (20000 allowed within 600 seconds) Dec 2 04:42:55 localhost python3.9[255046]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668575.2542298-1000-87350527762936/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:42:56 localhost python3.9[255101]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:42:56 localhost systemd[1]: Reloading. Dec 2 04:42:56 localhost systemd-rc-local-generator[255125]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:42:56 localhost systemd-sysv-generator[255131]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:42:56 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:56 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:56 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:56 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:42:56 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:56 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:56 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:56 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:57 localhost python3.9[255191]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:42:58 localhost systemd[1]: Reloading. Dec 2 04:42:58 localhost systemd-rc-local-generator[255221]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:42:58 localhost systemd-sysv-generator[255224]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:42:58 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:58 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:58 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:58 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:42:58 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:58 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:58 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:58 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:42:58 localhost systemd[1]: Starting neutron_sriov_agent container... Dec 2 04:42:58 localhost systemd[1]: tmp-crun.OhfIYa.mount: Deactivated successfully. Dec 2 04:42:58 localhost systemd[1]: Started libcrun container. 
Dec 2 04:42:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0266898e58d901f00366381468b4c5e50455ea88d0b0487c4e0c34f4cce7ed32/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Dec 2 04:42:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0266898e58d901f00366381468b4c5e50455ea88d0b0487c4e0c34f4cce7ed32/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 04:42:58 localhost podman[255232]: 2025-12-02 09:42:58.86666936 +0000 UTC m=+0.129807070 container init 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 04:42:58 localhost podman[255232]: 2025-12-02 09:42:58.878375617 +0000 UTC 
m=+0.141513327 container start 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=neutron_sriov_agent) Dec 2 04:42:58 localhost podman[255232]: neutron_sriov_agent Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: + sudo -E kolla_set_configs Dec 2 04:42:58 localhost systemd[1]: Started neutron_sriov_agent container. 
Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Validating config file Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Copying service configuration files Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Writing out command to execute Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/external Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Setting permission for 
/var/lib/neutron/.cache/python-entrypoints Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: ++ cat /run_command Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: + CMD=/usr/bin/neutron-sriov-nic-agent Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: + ARGS= Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: + sudo kolla_copy_cacerts Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: + [[ ! -n '' ]] Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: + . kolla_extend_start Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: Running command: '/usr/bin/neutron-sriov-nic-agent' Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: + umask 0022 Dec 2 04:42:58 localhost neutron_sriov_agent[255247]: + exec /usr/bin/neutron-sriov-nic-agent Dec 2 04:42:59 localhost python3.9[255370]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:42:59 localhost systemd[1]: Stopping neutron_sriov_agent container... Dec 2 04:42:59 localhost systemd[1]: libpod-41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e.scope: Deactivated successfully. 
Dec 2 04:42:59 localhost podman[255375]: 2025-12-02 09:42:59.834508962 +0000 UTC m=+0.075023779 container died 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:42:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e-userdata-shm.mount: Deactivated successfully. 
Dec 2 04:42:59 localhost podman[255375]: 2025-12-02 09:42:59.928282481 +0000 UTC m=+0.168797248 container cleanup 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=neutron_sriov_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 2 04:42:59 localhost podman[255375]: neutron_sriov_agent Dec 2 04:42:59 localhost podman[255386]: 2025-12-02 09:42:59.930367164 +0000 UTC m=+0.092352107 container cleanup 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=neutron_sriov_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:43:00 localhost podman[255399]: 2025-12-02 09:43:00.009005702 +0000 UTC m=+0.046277922 container cleanup 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', 
'/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:43:00 localhost podman[255399]: neutron_sriov_agent Dec 2 04:43:00 localhost systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully. Dec 2 04:43:00 localhost systemd[1]: Stopped neutron_sriov_agent container. Dec 2 04:43:00 localhost systemd[1]: Starting neutron_sriov_agent container... Dec 2 04:43:00 localhost systemd[1]: Started libcrun container. Dec 2 04:43:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0266898e58d901f00366381468b4c5e50455ea88d0b0487c4e0c34f4cce7ed32/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Dec 2 04:43:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0266898e58d901f00366381468b4c5e50455ea88d0b0487c4e0c34f4cce7ed32/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 04:43:00 localhost podman[255412]: 2025-12-02 09:43:00.144509174 +0000 UTC m=+0.103089894 container init 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 04:43:00 localhost podman[255412]: 2025-12-02 09:43:00.150690333 +0000 UTC m=+0.109271043 container start 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:43:00 localhost podman[255412]: neutron_sriov_agent Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: + sudo -E kolla_set_configs Dec 2 04:43:00 localhost systemd[1]: Started neutron_sriov_agent container. Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Validating config file Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Copying service configuration files Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Writing out command to execute Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/external Dec 2 04:43:00 localhost 
neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: ++ cat /run_command Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: + CMD=/usr/bin/neutron-sriov-nic-agent Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: + ARGS= Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: + sudo kolla_copy_cacerts Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: Running command: '/usr/bin/neutron-sriov-nic-agent' Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: + [[ ! -n '' ]] Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: + . 
kolla_extend_start Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: + umask 0022 Dec 2 04:43:00 localhost neutron_sriov_agent[255428]: + exec /usr/bin/neutron-sriov-nic-agent Dec 2 04:43:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38355 DF PROTO=TCP SPT=32940 DPT=9102 SEQ=301234046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54EF5230000000001030307) Dec 2 04:43:01 localhost systemd[1]: session-57.scope: Deactivated successfully. Dec 2 04:43:01 localhost systemd[1]: session-57.scope: Consumed 22.873s CPU time. Dec 2 04:43:01 localhost systemd-logind[760]: Session 57 logged out. Waiting for processes to exit. Dec 2 04:43:01 localhost systemd-logind[760]: Removed session 57. Dec 2 04:43:02 localhost neutron_sriov_agent[255428]: 2025-12-02 09:43:02.085 2 INFO neutron.common.config [-] Logging enabled!#033[00m Dec 2 04:43:02 localhost neutron_sriov_agent[255428]: 2025-12-02 09:43:02.085 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev43#033[00m Dec 2 04:43:02 localhost neutron_sriov_agent[255428]: 2025-12-02 09:43:02.086 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}#033[00m Dec 2 04:43:02 localhost neutron_sriov_agent[255428]: 2025-12-02 09:43:02.087 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}#033[00m Dec 2 04:43:02 localhost neutron_sriov_agent[255428]: 2025-12-02 09:43:02.087 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}#033[00m Dec 2 04:43:02 localhost neutron_sriov_agent[255428]: 2025-12-02 09:43:02.087 2 INFO 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}#033[00m Dec 2 04:43:02 localhost neutron_sriov_agent[255428]: 2025-12-02 09:43:02.087 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005541914.localdomain'}#033[00m Dec 2 04:43:02 localhost neutron_sriov_agent[255428]: 2025-12-02 09:43:02.088 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-3e20ecd4-d4aa-4f65-8974-198bfa2f7280 - - - - - -] RPC agent_id: nic-switch-agent.np0005541914.localdomain#033[00m Dec 2 04:43:02 localhost neutron_sriov_agent[255428]: 2025-12-02 09:43:02.096 2 INFO neutron.agent.agent_extensions_manager [None req-3e20ecd4-d4aa-4f65-8974-198bfa2f7280 - - - - - -] Loaded agent extensions: ['qos']#033[00m Dec 2 04:43:02 localhost neutron_sriov_agent[255428]: 2025-12-02 09:43:02.097 2 INFO neutron.agent.agent_extensions_manager [None req-3e20ecd4-d4aa-4f65-8974-198bfa2f7280 - - - - - -] Initializing agent extension 'qos'#033[00m Dec 2 04:43:02 localhost neutron_sriov_agent[255428]: 2025-12-02 09:43:02.366 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-3e20ecd4-d4aa-4f65-8974-198bfa2f7280 - - - - - -] Agent initialized successfully, now running... 
#033[00m Dec 2 04:43:02 localhost neutron_sriov_agent[255428]: 2025-12-02 09:43:02.367 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-3e20ecd4-d4aa-4f65-8974-198bfa2f7280 - - - - - -] SRIOV NIC Agent RPC Daemon Started!#033[00m Dec 2 04:43:02 localhost neutron_sriov_agent[255428]: 2025-12-02 09:43:02.368 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-3e20ecd4-d4aa-4f65-8974-198bfa2f7280 - - - - - -] Agent out of sync with plugin!#033[00m Dec 2 04:43:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:43:03.150 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:43:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:43:03.151 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:43:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:43:03.151 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:43:03 localhost podman[239757]: time="2025-12-02T09:43:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:43:03 localhost podman[239757]: @ - - [02/Dec/2025:09:43:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144244 "" "Go-http-client/1.1" Dec 2 04:43:03 localhost podman[239757]: @ - - [02/Dec/2025:09:43:03 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16292 "" "Go-http-client/1.1" Dec 2 04:43:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:43:06 localhost podman[255461]: 2025-12-02 09:43:06.077558177 +0000 UTC m=+0.085148718 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 04:43:06 localhost podman[255461]: 2025-12-02 09:43:06.114184464 +0000 UTC m=+0.121775025 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 04:43:06 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 04:43:06 localhost sshd[255482]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:43:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:43:06 localhost systemd-logind[760]: New session 58 of user zuul. Dec 2 04:43:06 localhost systemd[1]: Started Session 58 of User zuul. Dec 2 04:43:07 localhost systemd[1]: tmp-crun.YneE5y.mount: Deactivated successfully. Dec 2 04:43:07 localhost podman[255485]: 2025-12-02 09:43:07.01462531 +0000 UTC m=+0.077183863 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:43:07 localhost podman[255485]: 2025-12-02 09:43:07.022869262 +0000 UTC m=+0.085427895 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:43:07 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:43:07 localhost python3.9[255616]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:43:07 localhost nova_compute[229585]: 2025-12-02 09:43:07.890 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:43:08 localhost systemd[1]: tmp-crun.IrvXra.mount: Deactivated successfully. 
Dec 2 04:43:08 localhost podman[255621]: 2025-12-02 09:43:08.076748657 +0000 UTC m=+0.081170496 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 04:43:08 localhost podman[255621]: 2025-12-02 09:43:08.111001461 +0000 UTC m=+0.115423290 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller) Dec 2 04:43:08 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 04:43:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. 
Dec 2 04:43:08 localhost podman[255662]: 2025-12-02 09:43:08.571771502 +0000 UTC m=+0.066653044 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 04:43:08 localhost podman[255662]: 2025-12-02 09:43:08.58187454 +0000 UTC 
m=+0.076756042 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 04:43:08 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:43:08 localhost nova_compute[229585]: 2025-12-02 09:43:08.637 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:08 localhost nova_compute[229585]: 2025-12-02 09:43:08.662 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:08 localhost nova_compute[229585]: 2025-12-02 09:43:08.662 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:08 localhost nova_compute[229585]: 2025-12-02 09:43:08.663 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:43:09 localhost python3.9[255773]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:43:09 localhost nova_compute[229585]: 2025-12-02 09:43:09.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:09 localhost nova_compute[229585]: 2025-12-02 09:43:09.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:10 localhost python3.9[255836]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:43:10 localhost nova_compute[229585]: 2025-12-02 09:43:10.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:10 localhost nova_compute[229585]: 2025-12-02 09:43:10.641 229589 DEBUG oslo_service.periodic_task [None 
req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:11 localhost nova_compute[229585]: 2025-12-02 09:43:11.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:11 localhost nova_compute[229585]: 2025-12-02 09:43:11.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:43:11 localhost nova_compute[229585]: 2025-12-02 09:43:11.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:43:11 localhost nova_compute[229585]: 2025-12-02 09:43:11.658 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 2 04:43:12 localhost openstack_network_exporter[241816]: ERROR 09:43:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:43:12 localhost openstack_network_exporter[241816]: ERROR 09:43:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:43:12 localhost openstack_network_exporter[241816]: ERROR 09:43:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:43:12 localhost openstack_network_exporter[241816]: ERROR 09:43:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:43:12 localhost openstack_network_exporter[241816]:
Dec 2 04:43:12 localhost openstack_network_exporter[241816]: ERROR 09:43:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:43:12 localhost openstack_network_exporter[241816]:
Dec 2 04:43:12 localhost sshd[255839]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:43:14 localhost nova_compute[229585]: 2025-12-02 09:43:14.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:43:14 localhost nova_compute[229585]: 2025-12-02 09:43:14.658 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 04:43:14 localhost nova_compute[229585]: 2025-12-02 09:43:14.659 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 04:43:14 localhost nova_compute[229585]: 2025-12-02 09:43:14.659 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 04:43:14 localhost nova_compute[229585]: 2025-12-02 09:43:14.659 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 2 04:43:14 localhost nova_compute[229585]: 2025-12-02 09:43:14.660 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 04:43:14 localhost python3.9[255950]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 2 04:43:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:43:14 localhost systemd[1]: tmp-crun.8Cb9rO.mount: Deactivated successfully.
Dec 2 04:43:14 localhost podman[255953]: 2025-12-02 09:43:14.827961657 +0000 UTC m=+0.109085827 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, container_name=openstack_network_exporter, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container)
Dec 2 04:43:14 localhost podman[255953]: 2025-12-02 09:43:14.845815522 +0000 UTC m=+0.126939742 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 2 04:43:14 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 04:43:15 localhost nova_compute[229585]: 2025-12-02 09:43:15.121 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 04:43:15 localhost nova_compute[229585]: 2025-12-02 09:43:15.313 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 2 04:43:15 localhost nova_compute[229585]: 2025-12-02 09:43:15.314 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12965MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 2 04:43:15 localhost nova_compute[229585]: 2025-12-02 09:43:15.315 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 04:43:15 localhost nova_compute[229585]: 2025-12-02 09:43:15.315 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 04:43:15 localhost nova_compute[229585]: 2025-12-02 09:43:15.375 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 2 04:43:15 localhost nova_compute[229585]: 2025-12-02 09:43:15.378 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 2 04:43:15 localhost nova_compute[229585]: 2025-12-02 09:43:15.396 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 04:43:15 localhost nova_compute[229585]: 2025-12-02 09:43:15.819 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 04:43:15 localhost nova_compute[229585]: 2025-12-02 09:43:15.826 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 2 04:43:15 localhost nova_compute[229585]: 2025-12-02 09:43:15.844 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 2 04:43:15 localhost nova_compute[229585]: 2025-12-02 09:43:15.846 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 2 04:43:15 localhost nova_compute[229585]: 2025-12-02 09:43:15.846 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 04:43:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47933 DF PROTO=TCP SPT=44042 DPT=9102 SEQ=4103372198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54F2DEF0000000001030307)
Dec 2 04:43:16 localhost python3.9[256125]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 04:43:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47934 DF PROTO=TCP SPT=44042 DPT=9102 SEQ=4103372198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54F31E20000000001030307)
Dec 2 04:43:17 localhost systemd[1]: tmp-crun.iRpOCQ.mount: Deactivated successfully.
Dec 2 04:43:17 localhost podman[256235]: 2025-12-02 09:43:17.074698886 +0000 UTC m=+0.096988858 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 2 04:43:17 localhost podman[256235]: 2025-12-02 09:43:17.08989767 +0000 UTC m=+0.112187582 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 04:43:17 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 04:43:17 localhost python3.9[256236]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:17 localhost python3.9[256368]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38356 DF PROTO=TCP SPT=32940 DPT=9102 SEQ=301234046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54F35220000000001030307)
Dec 2 04:43:18 localhost python3.9[256478]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:18 localhost python3.9[256588]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47935 DF PROTO=TCP SPT=44042 DPT=9102 SEQ=4103372198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54F39E20000000001030307)
Dec 2 04:43:19 localhost python3.9[256698]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59337 DF PROTO=TCP SPT=41532 DPT=9102 SEQ=2930245784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54F3D220000000001030307)
Dec 2 04:43:20 localhost python3.9[256808]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:21 localhost python3.9[256918]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:43:21 localhost python3.9[257006]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668600.6121984-281-157018551260081/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=3ebfe8ab1da42a1c6ca52429f61716009c5fd177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:22 localhost python3.9[257114]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:43:22 localhost python3.9[257200]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668602.0612595-326-87680102253525/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 04:43:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47936 DF PROTO=TCP SPT=44042 DPT=9102 SEQ=4103372198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54F49A20000000001030307)
Dec 2 04:43:23 localhost podman[257201]: 2025-12-02 09:43:23.104857899 +0000 UTC m=+0.086420526 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec 2 04:43:23 localhost podman[257201]: 2025-12-02 09:43:23.123163617 +0000 UTC m=+0.104726274 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 2 04:43:23 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 04:43:23 localhost python3.9[257327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:43:24 localhost python3.9[257413]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668603.1174016-326-270681360036164/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:24 localhost python3.9[257521]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:43:25 localhost python3.9[257607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668604.2101135-326-216242466810126/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 checksum=f3a803fb4781ff2c03993a2db54cc2ba6fd7b97a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:26 localhost python3.9[257715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:43:26 localhost python3.9[257801]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668605.9944148-499-142329062657303/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=d6e803f833d8b5f768d3a3c0112defa742aeec55 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:27 localhost python3.9[257909]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:43:28 localhost python3.9[257995]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668607.1491325-544-152874252841788/.source follow=False _original_basename=haproxy.j2 checksum=e4288860049c1baef23f6e1bb6c6f91acb5432e7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:28 localhost python3.9[258103]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:43:29 localhost python3.9[258189]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668608.281913-544-34264019356847/.source follow=False _original_basename=dnsmasq.j2 checksum=efc19f376a79c40570368e9c2b979cde746f1ea8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:29 localhost python3.9[258297]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:43:30 localhost python3.9[258352]: ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:30 localhost python3.9[258460]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:43:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47937 DF PROTO=TCP SPT=44042 DPT=9102 SEQ=4103372198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54F69230000000001030307)
Dec 2 04:43:31 localhost python3.9[258546]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668610.4371908-631-214664646599854/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:31 localhost python3.9[258654]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:43:32 localhost python3.9[258766]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:43:33 localhost python3.9[258876]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:43:33 localhost podman[239757]: time="2025-12-02T09:43:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:43:33 localhost podman[239757]: @ - - [02/Dec/2025:09:43:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144244 "" "Go-http-client/1.1"
Dec 2 04:43:33 localhost podman[239757]: @ - - [02/Dec/2025:09:43:33 +0000] "GET
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16310 "" "Go-http-client/1.1" Dec 2 04:43:33 localhost python3.9[258933]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:34 localhost python3.9[259046]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:34 localhost python3.9[259103]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:35 localhost python3.9[259213]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:36 localhost python3.9[259323]: ansible-ansible.legacy.stat Invoked 
with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:43:36 localhost podman[259380]: 2025-12-02 09:43:36.378364149 +0000 UTC m=+0.082281699 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:43:36 localhost podman[259380]: 2025-12-02 09:43:36.392837791 +0000 UTC m=+0.096755331 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, container_name=ceilometer_agent_compute) Dec 2 04:43:36 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 04:43:36 localhost python3.9[259381]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:37 localhost python3.9[259510]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:43:37 localhost systemd[1]: tmp-crun.TxbK8f.mount: Deactivated successfully. 
Dec 2 04:43:37 localhost podman[259568]: 2025-12-02 09:43:37.482411384 +0000 UTC m=+0.099687471 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 04:43:37 localhost podman[259568]: 2025-12-02 09:43:37.493819342 +0000 UTC m=+0.111095449 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 04:43:37 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:43:37 localhost python3.9[259567]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:38 localhost python3.9[259699]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:43:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:43:38 localhost systemd[1]: Reloading. Dec 2 04:43:38 localhost systemd-rc-local-generator[259734]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:43:38 localhost systemd-sysv-generator[259738]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:43:38 localhost podman[259701]: 2025-12-02 09:43:38.466272885 +0000 UTC m=+0.121123135 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 2 04:43:38 localhost podman[259701]: 2025-12-02 09:43:38.499647052 +0000 UTC m=+0.154497282 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:38 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 04:43:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:43:38 localhost podman[259760]: 2025-12-02 09:43:38.799402253 +0000 UTC m=+0.076452592 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Dec 2 04:43:38 localhost podman[259760]: 2025-12-02 09:43:38.805592571 +0000 UTC m=+0.082642930 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 04:43:38 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:43:39 localhost python3.9[259887]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:39 localhost python3.9[259944]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:40 localhost python3.9[260054]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:40 localhost python3.9[260111]: ansible-ansible.legacy.file Invoked with 
group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:41 localhost python3.9[260221]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:43:41 localhost systemd[1]: Reloading. Dec 2 04:43:41 localhost systemd-rc-local-generator[260247]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:43:41 localhost systemd-sysv-generator[260251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:42 localhost openstack_network_exporter[241816]: ERROR 09:43:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:43:42 localhost openstack_network_exporter[241816]: ERROR 09:43:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:43:42 localhost openstack_network_exporter[241816]: ERROR 09:43:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:43:42 localhost openstack_network_exporter[241816]: ERROR 09:43:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:43:42 localhost openstack_network_exporter[241816]: Dec 2 04:43:42 localhost openstack_network_exporter[241816]: ERROR 09:43:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:43:42 localhost openstack_network_exporter[241816]: Dec 2 04:43:42 localhost systemd[1]: Starting Create netns directory... Dec 2 04:43:42 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 04:43:42 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 04:43:42 localhost systemd[1]: Finished Create netns directory. 
Dec 2 04:43:43 localhost python3.9[260373]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:43 localhost python3.9[260483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:44 localhost python3.9[260571]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668623.3722298-1075-212584866539915/.source.json _original_basename=.ffpbcxcs follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:43:45 localhost podman[260682]: 2025-12-02 09:43:45.069149552 +0000 UTC m=+0.081658601 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal) Dec 2 04:43:45 localhost podman[260682]: 2025-12-02 09:43:45.087294576 +0000 UTC m=+0.099803615 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 2 04:43:45 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 04:43:45 localhost python3.9[260681]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47914 DF PROTO=TCP SPT=33656 DPT=9102 SEQ=3142268360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54FA31E0000000001030307) Dec 2 04:43:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47915 DF PROTO=TCP SPT=33656 DPT=9102 SEQ=3142268360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54FA7220000000001030307) Dec 2 04:43:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. 
Dec 2 04:43:47 localhost podman[261010]: 2025-12-02 09:43:47.292060135 +0000 UTC m=+0.083699654 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:43:47 localhost podman[261010]: 2025-12-02 09:43:47.303781542 +0000 UTC m=+0.095421061 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:43:47 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:43:47 localhost python3.9[261009]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False Dec 2 04:43:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47938 DF PROTO=TCP SPT=44042 DPT=9102 SEQ=4103372198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54FA9220000000001030307) Dec 2 04:43:48 localhost python3.9[261142]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:43:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47916 DF PROTO=TCP SPT=33656 DPT=9102 SEQ=3142268360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54FAF230000000001030307) Dec 2 04:43:49 localhost python3.9[261252]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 2 04:43:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38357 DF PROTO=TCP SPT=32940 DPT=9102 SEQ=301234046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54FB3230000000001030307) Dec 2 04:43:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47917 DF PROTO=TCP SPT=33656 DPT=9102 SEQ=3142268360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54FBEE30000000001030307) Dec 2 04:43:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 04:43:53 localhost systemd[1]: tmp-crun.sdY4Jc.mount: Deactivated successfully. Dec 2 04:43:53 localhost podman[261474]: 2025-12-02 09:43:53.580916517 +0000 UTC m=+0.095637228 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd) Dec 2 04:43:53 localhost podman[261474]: 2025-12-02 09:43:53.621856805 
+0000 UTC m=+0.136577486 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 04:43:53 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:43:53 localhost python3[261475]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:43:54 localhost podman[261531]: Dec 2 04:43:54 localhost podman[261531]: 2025-12-02 09:43:54.056029423 +0000 UTC m=+0.091349916 container create 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, 
org.label-schema.build-date=20251125) Dec 2 04:43:54 localhost podman[261531]: 2025-12-02 09:43:54.008329109 +0000 UTC m=+0.043649592 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 04:43:54 localhost python3[261475]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9 --label config_id=neutron_dhcp --label container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume 
/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 04:43:54 localhost python3.9[261678]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:43:55 localhost python3.9[261790]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:55 localhost python3.9[261845]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:43:56 localhost python3.9[261954]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668636.0255766-1339-31479654538936/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None 
setype=None attributes=None Dec 2 04:43:57 localhost python3.9[262009]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:43:57 localhost systemd[1]: Reloading. Dec 2 04:43:57 localhost systemd-rc-local-generator[262034]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:43:57 localhost systemd-sysv-generator[262038]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost python3.9[262099]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:43:58 localhost systemd[1]: Reloading. Dec 2 04:43:58 localhost systemd-sysv-generator[262129]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:43:58 localhost systemd-rc-local-generator[262124]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost systemd[1]: Starting neutron_dhcp_agent container... Dec 2 04:43:58 localhost systemd[1]: Started libcrun container. Dec 2 04:43:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c8315f8090c49b053a083b1d9bf117cce35685b6c601c36e78e817b365e9a6/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Dec 2 04:43:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c8315f8090c49b053a083b1d9bf117cce35685b6c601c36e78e817b365e9a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 04:43:58 localhost podman[262140]: 2025-12-02 09:43:58.579242216 +0000 UTC m=+0.122271090 container init 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=neutron_dhcp_agent) Dec 2 04:43:58 localhost podman[262140]: 2025-12-02 09:43:58.58689775 +0000 UTC m=+0.129926634 container start 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, org.label-schema.schema-version=1.0, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 04:43:58 localhost podman[262140]: neutron_dhcp_agent Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: + sudo -E kolla_set_configs Dec 2 04:43:58 localhost systemd[1]: Started neutron_dhcp_agent container. Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Validating config file Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Copying service configuration files Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Writing out command to execute Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for 
/var/lib/neutron/kill_scripts Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/external Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/00c6e44062d81bae38ea1c96678049e54d3f27d226bb6f9651816ab13eb94f06 Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Dec 2 04:43:58 localhost 
neutron_dhcp_agent[262156]: ++ cat /run_command Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: + CMD=/usr/bin/neutron-dhcp-agent Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: + ARGS= Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: + sudo kolla_copy_cacerts Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: + [[ ! -n '' ]] Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: + . kolla_extend_start Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: Running command: '/usr/bin/neutron-dhcp-agent' Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: + umask 0022 Dec 2 04:43:58 localhost neutron_dhcp_agent[262156]: + exec /usr/bin/neutron-dhcp-agent Dec 2 04:43:59 localhost neutron_dhcp_agent[262156]: 2025-12-02 09:43:59.898 262160 INFO neutron.common.config [-] Logging enabled!#033[00m Dec 2 04:43:59 localhost neutron_dhcp_agent[262156]: 2025-12-02 09:43:59.898 262160 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43#033[00m Dec 2 04:44:00 localhost neutron_dhcp_agent[262156]: 2025-12-02 09:44:00.278 262160 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Dec 2 04:44:00 localhost python3.9[262281]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:44:00 localhost systemd[1]: Stopping neutron_dhcp_agent container... 
Dec 2 04:44:00 localhost neutron_dhcp_agent[262156]: 2025-12-02 09:44:00.751 262160 INFO neutron.agent.dhcp.agent [None req-9550d0b2-9c7b-4d81-8465-e009d7c59926 - - - - - -] All active networks have been fetched through RPC.#033[00m
Dec 2 04:44:00 localhost neutron_dhcp_agent[262156]: 2025-12-02 09:44:00.752 262160 INFO neutron.agent.dhcp.agent [-] Starting network 447a69ac-5cfc-4dee-8482-764b4cafdf04 dhcp configuration#033[00m
Dec 2 04:44:00 localhost neutron_dhcp_agent[262156]: 2025-12-02 09:44:00.803 262160 INFO neutron.agent.dhcp.agent [-] Starting network 595e1c9b-709c-41d2-9212-0b18b13291a8 dhcp configuration#033[00m
Dec 2 04:44:01 localhost systemd[1]: libpod-6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a.scope: Deactivated successfully.
Dec 2 04:44:01 localhost systemd[1]: libpod-6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a.scope: Consumed 2.082s CPU time.
Dec 2 04:44:01 localhost podman[262285]: 2025-12-02 09:44:01.132215542 +0000 UTC m=+0.400933717 container died 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 2 04:44:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a-userdata-shm.mount: Deactivated successfully.
Dec 2 04:44:01 localhost podman[262285]: 2025-12-02 09:44:01.231088286 +0000 UTC m=+0.499806481 container cleanup 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 2 04:44:01 localhost podman[262285]: neutron_dhcp_agent
Dec 2 04:44:01 localhost podman[262326]: error opening file `/run/crun/6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a/status`: No such file or directory
Dec 2 04:44:01 localhost podman[262315]: 2025-12-02 09:44:01.33385756 +0000 UTC m=+0.067561421 container cleanup 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, container_name=neutron_dhcp_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 2 04:44:01 localhost podman[262315]: neutron_dhcp_agent
Dec 2 04:44:01 localhost systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully.
Dec 2 04:44:01 localhost systemd[1]: Stopped neutron_dhcp_agent container.
Dec 2 04:44:01 localhost systemd[1]: Starting neutron_dhcp_agent container...
Dec 2 04:44:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47918 DF PROTO=TCP SPT=33656 DPT=9102 SEQ=3142268360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54FDF220000000001030307)
Dec 2 04:44:01 localhost systemd[1]: Started libcrun container.
Dec 2 04:44:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c8315f8090c49b053a083b1d9bf117cce35685b6c601c36e78e817b365e9a6/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 2 04:44:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c8315f8090c49b053a083b1d9bf117cce35685b6c601c36e78e817b365e9a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 04:44:01 localhost podman[262328]: 2025-12-02 09:44:01.474423116 +0000 UTC m=+0.111362276 container init 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:44:01 localhost podman[262328]: 2025-12-02 09:44:01.482763811 +0000 UTC m=+0.119703001 container start 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:44:01 localhost podman[262328]: neutron_dhcp_agent
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: + sudo -E kolla_set_configs
Dec 2 04:44:01 localhost systemd[1]: Started neutron_dhcp_agent container.
Dec 2 04:44:01 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:01.521 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 04:44:01 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:01.522 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 2 04:44:01 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:01.523 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Validating config file
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Copying service configuration files
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Writing out command to execute
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/00c6e44062d81bae38ea1c96678049e54d3f27d226bb6f9651816ab13eb94f06
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/595e1c9b-709c-41d2-9212-0b18b13291a8
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: ++ cat /run_command
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: + CMD=/usr/bin/neutron-dhcp-agent
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: + ARGS=
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: + sudo kolla_copy_cacerts
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: + [[ ! -n '' ]]
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: + . kolla_extend_start
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: Running command: '/usr/bin/neutron-dhcp-agent'
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: + umask 0022
Dec 2 04:44:01 localhost neutron_dhcp_agent[262343]: + exec /usr/bin/neutron-dhcp-agent
Dec 2 04:44:02 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:02.794 262347 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec 2 04:44:02 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:02.794 262347 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43#033[00m
Dec 2 04:44:02 localhost systemd-logind[760]: Session 58 logged out. Waiting for processes to exit.
Dec 2 04:44:02 localhost systemd[1]: session-58.scope: Deactivated successfully.
Dec 2 04:44:02 localhost systemd[1]: session-58.scope: Consumed 34.343s CPU time.
Dec 2 04:44:02 localhost systemd-logind[760]: Removed session 58.
Dec 2 04:44:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:03.151 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:44:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:03.154 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:44:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:03.154 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:44:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.175 262347 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m
Dec 2 04:44:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.347 262347 INFO neutron.agent.dhcp.agent [None req-02ca4a4f-d11d-4589-b5a7-96cca885a1c9 - - - - - -] All active networks have been fetched through RPC.#033[00m
Dec 2 04:44:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.348 262347 INFO neutron.agent.dhcp.agent [-] Starting network 447a69ac-5cfc-4dee-8482-764b4cafdf04 dhcp configuration#033[00m
Dec 2 04:44:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.397 262347 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpbvcfomb8/privsep.sock']#033[00m
Dec 2 04:44:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.398 262347 INFO neutron.agent.dhcp.agent [-] Starting network 595e1c9b-709c-41d2-9212-0b18b13291a8 dhcp configuration#033[00m
Dec 2 04:44:03 localhost podman[239757]: time="2025-12-02T09:44:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:44:03 localhost podman[239757]: @ - - [02/Dec/2025:09:44:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146549 "" "Go-http-client/1.1"
Dec 2 04:44:03 localhost podman[239757]: @ - - [02/Dec/2025:09:44:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16746 "" "Go-http-client/1.1"
Dec 2 04:44:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.975 262347 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec 2 04:44:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.888 262380 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec 2 04:44:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.893 262380 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec 2 04:44:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.896 262380 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Dec 2 04:44:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.897 262380 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262380#033[00m
Dec 2 04:44:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.979 262347 WARNING oslo_privsep.priv_context [-] privsep daemon already running#033[00m
Dec 2 04:44:04 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:04.491 262347 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpo7j0x9to/privsep.sock']#033[00m
Dec 2 04:44:05 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:05.080 262347 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec 2 04:44:05 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:04.987 262390 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec 2 04:44:05 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:04.992 262390 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec 2 04:44:05 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:04.996 262390 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec 2 04:44:05 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:04.996 262390 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262390#033[00m
Dec 2 04:44:05 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:05.084 262347 WARNING oslo_privsep.priv_context [-] privsep daemon already running#033[00m
Dec 2 04:44:05 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:05.980 262347 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp59hk0oy_/privsep.sock']#033[00m
Dec 2 04:44:06 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:06.609 262347 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec 2 04:44:06 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:06.512 262406 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec 2 04:44:06 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:06.516 262406 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec 2 04:44:06 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:06.518 262406 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec 2 04:44:06 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:06.518 262406 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262406#033[00m
Dec 2 04:44:06 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:06.612 262347 WARNING oslo_privsep.priv_context [-] privsep daemon already running#033[00m
Dec 2 04:44:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:44:07 localhost systemd[1]: tmp-crun.PUrL6p.mount: Deactivated successfully.
Dec 2 04:44:07 localhost podman[262412]: 2025-12-02 09:44:07.073389953 +0000 UTC m=+0.074741881 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Dec 2 04:44:07 localhost podman[262412]: 2025-12-02 09:44:07.084810721 +0000 UTC m=+0.086162639 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 2 04:44:07 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 2 04:44:07 localhost nova_compute[229585]: 2025-12-02 09:44:07.847 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:44:07 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:07.968 262347 INFO neutron.agent.linux.ip_lib [-] Device tap51dc7089-37 cannot be used as it has no MAC address#033[00m
Dec 2 04:44:07 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:07.969 262347 INFO neutron.agent.linux.ip_lib [-] Device tap71143481-6b cannot be used as it has no MAC address#033[00m
Dec 2 04:44:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:44:08 localhost kernel: device tap51dc7089-37 entered promiscuous mode
Dec 2 04:44:08 localhost ovn_controller[153778]: 2025-12-02T09:44:08Z|00025|binding|INFO|Claiming lport 51dc7089-37a2-48fc-93b9-4ba936552f69 for this chassis.
Dec 2 04:44:08 localhost ovn_controller[153778]: 2025-12-02T09:44:08Z|00026|binding|INFO|51dc7089-37a2-48fc-93b9-4ba936552f69: Claiming unknown
Dec 2 04:44:08 localhost NetworkManager[5967]: [1764668648.0806] manager: (tap51dc7089-37): new Generic device (/org/freedesktop/NetworkManager/Devices/13)
Dec 2 04:44:08 localhost podman[262447]: 2025-12-02 09:44:08.121936115 +0000 UTC m=+0.093544453 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:44:08 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:08.129 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.172/24', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-447a69ac-5cfc-4dee-8482-764b4cafdf04', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-447a69ac-5cfc-4dee-8482-764b4cafdf04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf41aa1c-eb45-46e3-a272-be0018b06eb4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=51dc7089-37a2-48fc-93b9-4ba936552f69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 04:44:08 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:08.131 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 51dc7089-37a2-48fc-93b9-4ba936552f69 in datapath 447a69ac-5cfc-4dee-8482-764b4cafdf04 bound to our chassis#033[00m
Dec 2 04:44:08 localhost ovn_controller[153778]: 2025-12-02T09:44:08Z|00027|ovn_bfd|INFO|Enabled BFD on interface ovn-4d166c-0
Dec 2 04:44:08 localhost ovn_controller[153778]: 2025-12-02T09:44:08Z|00028|ovn_bfd|INFO|Enabled BFD on interface ovn-2587fe-0
Dec 2 04:44:08 localhost ovn_controller[153778]: 2025-12-02T09:44:08Z|00029|ovn_bfd|INFO|Enabled BFD on interface ovn-be95dc-0
Dec 2 04:44:08 localhost podman[262447]: 2025-12-02 09:44:08.136808509 +0000 UTC m=+0.108416847 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:44:08 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:08.136 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port e6bceff3-4869-485b-b4ce-6bba322f358c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Dec 2 04:44:08 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:08.136 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 447a69ac-5cfc-4dee-8482-764b4cafdf04, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 04:44:08 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:08.137 159483 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpny958rap/privsep.sock']#033[00m
Dec 2 04:44:08 localhost kernel: device tap71143481-6b entered promiscuous mode
Dec 2 04:44:08 localhost NetworkManager[5967]: [1764668648.1469] manager: (tap71143481-6b): new Generic device (/org/freedesktop/NetworkManager/Devices/14)
Dec 2 04:44:08 localhost systemd-udevd[262460]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 04:44:08 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:44:08 localhost ovn_controller[153778]: 2025-12-02T09:44:08Z|00030|binding|INFO|Claiming lport 71143481-6bca-4043-aaee-4555f1b73e03 for this chassis. Dec 2 04:44:08 localhost ovn_controller[153778]: 2025-12-02T09:44:08Z|00031|binding|INFO|71143481-6bca-4043-aaee-4555f1b73e03: Claiming unknown Dec 2 04:44:08 localhost ovn_controller[153778]: 2025-12-02T09:44:08Z|00032|binding|INFO|Setting lport 51dc7089-37a2-48fc-93b9-4ba936552f69 ovn-installed in OVS Dec 2 04:44:08 localhost ovn_controller[153778]: 2025-12-02T09:44:08Z|00033|binding|INFO|Setting lport 51dc7089-37a2-48fc-93b9-4ba936552f69 up in Southbound Dec 2 04:44:08 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:08.169 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.3/24', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-595e1c9b-709c-41d2-9212-0b18b13291a8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-595e1c9b-709c-41d2-9212-0b18b13291a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d69817-a35d-4528-880f-f329bfbd969c, chassis=[], tunnel_key=4, 
gateway_chassis=[], requested_chassis=[], logical_port=71143481-6bca-4043-aaee-4555f1b73e03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 04:44:08 localhost journal[229262]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, ) Dec 2 04:44:08 localhost journal[229262]: hostname: np0005541914.localdomain Dec 2 04:44:08 localhost journal[229262]: ethtool ioctl error on tap71143481-6b: No such device Dec 2 04:44:08 localhost journal[229262]: ethtool ioctl error on tap71143481-6b: No such device Dec 2 04:44:08 localhost ovn_controller[153778]: 2025-12-02T09:44:08Z|00034|binding|INFO|Setting lport 71143481-6bca-4043-aaee-4555f1b73e03 ovn-installed in OVS Dec 2 04:44:08 localhost journal[229262]: ethtool ioctl error on tap71143481-6b: No such device Dec 2 04:44:08 localhost ovn_controller[153778]: 2025-12-02T09:44:08Z|00035|binding|INFO|Setting lport 71143481-6bca-4043-aaee-4555f1b73e03 up in Southbound Dec 2 04:44:08 localhost journal[229262]: ethtool ioctl error on tap71143481-6b: No such device Dec 2 04:44:08 localhost journal[229262]: ethtool ioctl error on tap71143481-6b: No such device Dec 2 04:44:08 localhost journal[229262]: ethtool ioctl error on tap71143481-6b: No such device Dec 2 04:44:08 localhost journal[229262]: ethtool ioctl error on tap71143481-6b: No such device Dec 2 04:44:08 localhost journal[229262]: ethtool ioctl error on tap71143481-6b: No such device Dec 2 04:44:08 localhost nova_compute[229585]: 2025-12-02 09:44:08.642 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:08 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:08.743 159483 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Dec 2 
04:44:08 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:08.744 159483 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpny958rap/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Dec 2 04:44:08 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:08.620 262550 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 2 04:44:08 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:08.627 262550 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 2 04:44:08 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:08.630 262550 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Dec 2 04:44:08 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:08.631 262550 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262550#033[00m Dec 2 04:44:08 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:08.747 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[572c9a2a-1ade-4d2f-b537-30273a0dade6]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:44:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:44:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. 
Dec 2 04:44:09 localhost podman[262556]: 2025-12-02 09:44:09.122789574 +0000 UTC m=+0.118777953 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 04:44:09 localhost podman[262556]: 2025-12-02 09:44:09.168554479 +0000 UTC m=+0.164542908 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2) Dec 2 04:44:09 localhost podman[262555]: 2025-12-02 09:44:09.178307856 +0000 UTC m=+0.173758849 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 04:44:09 localhost podman[262555]: 2025-12-02 09:44:09.186984731 +0000 UTC m=+0.182435744 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Dec 2 04:44:09 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 04:44:09 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:44:09 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:09.276 262550 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:44:09 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:09.276 262550 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:44:09 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:09.276 262550 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:44:09 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:09.374 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[cfed8121-9ccf-471c-8dbf-08f97dc9d29d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:44:09 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:09.375 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 71143481-6bca-4043-aaee-4555f1b73e03 in datapath 595e1c9b-709c-41d2-9212-0b18b13291a8 unbound from our chassis#033[00m Dec 2 04:44:09 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:09.376 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port 91c85177-b7d9-4980-b09c-b22e92e8c189 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 04:44:09 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:09.376 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 595e1c9b-709c-41d2-9212-0b18b13291a8, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 04:44:09 localhost ovn_metadata_agent[159477]: 2025-12-02 09:44:09.377 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[a44c8467-29fe-46c0-b655-befa8e53ee02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:44:09 localhost podman[262637]: Dec 2 04:44:09 localhost podman[262637]: 2025-12-02 09:44:09.582308986 +0000 UTC m=+0.080208447 container create 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 04:44:09 localhost systemd[1]: Started libpod-conmon-69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8.scope. Dec 2 04:44:09 localhost systemd[1]: Started libcrun container. 
Dec 2 04:44:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c6b94ed714f6eb0d65e561b3bb1abfb0a8d5b609cce5e65500be00bd7ad6a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 04:44:09 localhost podman[262657]: Dec 2 04:44:09 localhost podman[262637]: 2025-12-02 09:44:09.639774528 +0000 UTC m=+0.137673989 container init 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:44:09 localhost nova_compute[229585]: 2025-12-02 09:44:09.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:09 localhost podman[262637]: 2025-12-02 09:44:09.548821105 +0000 UTC m=+0.046720616 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 04:44:09 localhost podman[262637]: 2025-12-02 09:44:09.649505415 +0000 UTC m=+0.147404866 container start 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 04:44:09 localhost dnsmasq[262677]: started, version 2.85 cachesize 150 Dec 2 04:44:09 localhost dnsmasq[262677]: DNS service limited to local subnets Dec 2 04:44:09 localhost dnsmasq[262677]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 04:44:09 localhost dnsmasq[262677]: warning: no upstream servers configured Dec 2 04:44:09 localhost dnsmasq-dhcp[262677]: DHCP, static leases only on 192.168.122.0, lease time 1d Dec 2 04:44:09 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses Dec 2 04:44:09 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 04:44:09 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 04:44:09 localhost podman[262657]: 2025-12-02 09:44:09.601822821 +0000 UTC m=+0.055505314 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 04:44:09 localhost podman[262657]: 2025-12-02 09:44:09.701983814 +0000 UTC m=+0.155666277 container create eb81db119aab864a853934e55954a079d18831bd89ea944835e873b52aa7805a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-595e1c9b-709c-41d2-9212-0b18b13291a8, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 04:44:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:09.703 262347 INFO 
neutron.agent.dhcp.agent [None req-1170d6cb-19dd-407d-a976-0819479f745d - - - - - -] Finished network 447a69ac-5cfc-4dee-8482-764b4cafdf04 dhcp configuration#033[00m Dec 2 04:44:09 localhost systemd[1]: Started libpod-conmon-eb81db119aab864a853934e55954a079d18831bd89ea944835e873b52aa7805a.scope. Dec 2 04:44:09 localhost systemd[1]: Started libcrun container. Dec 2 04:44:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75be5ee0573a5f22e7c63c33e70afcb1e8e8310b3fc827f5b5a982381468071b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 04:44:09 localhost podman[262657]: 2025-12-02 09:44:09.787845373 +0000 UTC m=+0.241527876 container init eb81db119aab864a853934e55954a079d18831bd89ea944835e873b52aa7805a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-595e1c9b-709c-41d2-9212-0b18b13291a8, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:44:09 localhost dnsmasq[262683]: started, version 2.85 cachesize 150 Dec 2 04:44:09 localhost dnsmasq[262683]: DNS service limited to local subnets Dec 2 04:44:09 localhost dnsmasq[262683]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 04:44:09 localhost dnsmasq[262683]: warning: no upstream servers configured Dec 2 04:44:09 localhost dnsmasq-dhcp[262683]: DHCP, static leases only on 192.168.0.0, lease time 1d Dec 2 04:44:09 localhost dnsmasq[262683]: read /var/lib/neutron/dhcp/595e1c9b-709c-41d2-9212-0b18b13291a8/addn_hosts - 2 addresses Dec 2 04:44:09 localhost dnsmasq-dhcp[262683]: read 
/var/lib/neutron/dhcp/595e1c9b-709c-41d2-9212-0b18b13291a8/host Dec 2 04:44:09 localhost dnsmasq-dhcp[262683]: read /var/lib/neutron/dhcp/595e1c9b-709c-41d2-9212-0b18b13291a8/opts Dec 2 04:44:09 localhost podman[262657]: 2025-12-02 09:44:09.923618142 +0000 UTC m=+0.377300585 container start eb81db119aab864a853934e55954a079d18831bd89ea944835e873b52aa7805a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-595e1c9b-709c-41d2-9212-0b18b13291a8, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Dec 2 04:44:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:09.966 262347 INFO neutron.agent.dhcp.agent [None req-7bb3c70d-be97-4642-9c3d-ce8b509ffd27 - - - - - -] DHCP configuration for ports {'51dc7089-37a2-48fc-93b9-4ba936552f69', '814edf37-348d-4c72-93ca-d397ec86c224', '9e501a82-0cca-4b7c-94e7-c6ee2e9d1a23', '9d9215ec-7a9b-4874-b1f4-9e2052b2af35'} is completed#033[00m Dec 2 04:44:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:09.989 262347 INFO neutron.agent.dhcp.agent [None req-95195c8e-baa2-412b-81ef-ca5d4173c1a3 - - - - - -] Finished network 595e1c9b-709c-41d2-9212-0b18b13291a8 dhcp configuration#033[00m Dec 2 04:44:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:09.989 262347 INFO neutron.agent.dhcp.agent [None req-02ca4a4f-d11d-4589-b5a7-96cca885a1c9 - - - - - -] Synchronizing state complete#033[00m Dec 2 04:44:10 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:10.055 262347 INFO neutron.agent.dhcp.agent [None req-02ca4a4f-d11d-4589-b5a7-96cca885a1c9 - - - - - -] DHCP agent started#033[00m Dec 2 04:44:10 localhost neutron_dhcp_agent[262343]: 2025-12-02 09:44:10.440 262347 INFO 
neutron.agent.dhcp.agent [None req-0948eb2d-0fb9-4553-b15d-d99f5e3d9d5e - - - - - -] DHCP configuration for ports {'00628954-c581-410a-8676-a93c861b87a0', '814edf37-348d-4c72-93ca-d397ec86c224', '71143481-6bca-4043-aaee-4555f1b73e03', '9d9215ec-7a9b-4874-b1f4-9e2052b2af35', '9e501a82-0cca-4b7c-94e7-c6ee2e9d1a23', '51dc7089-37a2-48fc-93b9-4ba936552f69', 'd6e7da3f-8574-49e0-8ba1-2f642b3cec92', '4a318f6a-b3c1-4690-8246-f7d046ccd64a'} is completed#033[00m Dec 2 04:44:10 localhost nova_compute[229585]: 2025-12-02 09:44:10.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:10 localhost nova_compute[229585]: 2025-12-02 09:44:10.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:10 localhost nova_compute[229585]: 2025-12-02 09:44:10.641 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:44:11 localhost nova_compute[229585]: 2025-12-02 09:44:11.636 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:11 localhost nova_compute[229585]: 2025-12-02 09:44:11.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:12 localhost openstack_network_exporter[241816]: ERROR 09:44:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:44:12 localhost openstack_network_exporter[241816]: ERROR 09:44:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:44:12 localhost openstack_network_exporter[241816]: ERROR 09:44:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:44:12 localhost openstack_network_exporter[241816]: ERROR 09:44:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:44:12 localhost openstack_network_exporter[241816]: Dec 2 04:44:12 localhost openstack_network_exporter[241816]: ERROR 09:44:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:44:12 localhost openstack_network_exporter[241816]: Dec 2 04:44:13 localhost nova_compute[229585]: 2025-12-02 09:44:13.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:13 localhost nova_compute[229585]: 2025-12-02 09:44:13.641 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:44:13 localhost nova_compute[229585]: 2025-12-02 09:44:13.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:44:13 localhost nova_compute[229585]: 2025-12-02 09:44:13.653 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found 
this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:44:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:44:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46466 DF PROTO=TCP SPT=55932 DPT=9102 SEQ=839054200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD550184F0000000001030307)
Dec 2 04:44:16 localhost podman[262684]: 2025-12-02 09:44:16.128780552 +0000 UTC m=+0.132388138 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, release=1755695350, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 2 04:44:16 localhost podman[262684]: 2025-12-02 09:44:16.14017705 +0000 UTC m=+0.143784586 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, build-date=2025-08-20T13:12:41, architecture=x86_64, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 2 04:44:16 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 04:44:16 localhost nova_compute[229585]: 2025-12-02 09:44:16.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:44:16 localhost nova_compute[229585]: 2025-12-02 09:44:16.689 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 04:44:16 localhost nova_compute[229585]: 2025-12-02 09:44:16.689 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 04:44:16 localhost nova_compute[229585]: 2025-12-02 09:44:16.689 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 04:44:16 localhost nova_compute[229585]: 2025-12-02 09:44:16.689 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 2 04:44:16 localhost nova_compute[229585]: 2025-12-02 09:44:16.690 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 04:44:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46467 DF PROTO=TCP SPT=55932 DPT=9102 SEQ=839054200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5501C620000000001030307)
Dec 2 04:44:17 localhost nova_compute[229585]: 2025-12-02 09:44:17.087 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 04:44:17 localhost nova_compute[229585]: 2025-12-02 09:44:17.215 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 2 04:44:17 localhost nova_compute[229585]: 2025-12-02 09:44:17.216 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12502MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 2 04:44:17 localhost nova_compute[229585]: 2025-12-02 09:44:17.217 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 04:44:17 localhost nova_compute[229585]: 2025-12-02 09:44:17.217 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 04:44:17 localhost nova_compute[229585]: 2025-12-02 09:44:17.270 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 2 04:44:17 localhost nova_compute[229585]: 2025-12-02 09:44:17.271 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 2 04:44:17 localhost nova_compute[229585]: 2025-12-02 09:44:17.294 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 04:44:17 localhost nova_compute[229585]: 2025-12-02 09:44:17.724 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 04:44:17 localhost nova_compute[229585]: 2025-12-02 09:44:17.729 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 2 04:44:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47919 DF PROTO=TCP SPT=33656 DPT=9102 SEQ=3142268360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5501F220000000001030307)
Dec 2 04:44:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 04:44:18 localhost nova_compute[229585]: 2025-12-02 09:44:17.997 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 2 04:44:18 localhost nova_compute[229585]: 2025-12-02 09:44:18.001 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 2 04:44:18 localhost nova_compute[229585]: 2025-12-02 09:44:18.002 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 04:44:18 localhost systemd[1]: tmp-crun.NHGSYU.mount: Deactivated successfully.
Dec 2 04:44:18 localhost podman[262748]: 2025-12-02 09:44:18.095070038 +0000 UTC m=+0.095916345 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 2 04:44:18 localhost podman[262748]: 2025-12-02 09:44:18.103428053 +0000 UTC m=+0.104274320 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 2 04:44:18 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 04:44:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46468 DF PROTO=TCP SPT=55932 DPT=9102 SEQ=839054200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55024620000000001030307)
Dec 2 04:44:19 localhost sshd[262770]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:44:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47939 DF PROTO=TCP SPT=44042 DPT=9102 SEQ=4103372198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55027220000000001030307)
Dec 2 04:44:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46469 DF PROTO=TCP SPT=55932 DPT=9102 SEQ=839054200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55034230000000001030307)
Dec 2 04:44:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 04:44:24 localhost podman[262772]: 2025-12-02 09:44:24.076760006 +0000 UTC m=+0.077475553 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 2 04:44:24 localhost podman[262772]: 2025-12-02 09:44:24.091956949 +0000 UTC m=+0.092672476 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 2 04:44:24 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 04:44:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46470 DF PROTO=TCP SPT=55932 DPT=9102 SEQ=839054200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55055220000000001030307)
Dec 2 04:44:33 localhost podman[239757]: time="2025-12-02T09:44:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:44:33 localhost podman[239757]: @ - - [02/Dec/2025:09:44:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 2 04:44:33 localhost podman[239757]: @ - - [02/Dec/2025:09:44:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17695 "" "Go-http-client/1.1"
Dec 2 04:44:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:44:38 localhost systemd[1]: tmp-crun.gLdXML.mount: Deactivated successfully.
Dec 2 04:44:38 localhost podman[262791]: 2025-12-02 09:44:38.078330286 +0000 UTC m=+0.084676557 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 2 04:44:38 localhost podman[262791]: 2025-12-02 09:44:38.116863777 +0000 UTC m=+0.123210058 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec 2 04:44:38 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 2 04:44:38 localhost ovn_controller[153778]: 2025-12-02T09:44:38Z|00036|memory_trim|INFO|Detected inactivity (last active 30018 ms ago): trimming memory
Dec 2 04:44:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:44:38 localhost systemd[1]: tmp-crun.Zmn8cP.mount: Deactivated successfully.
Dec 2 04:44:38 localhost podman[262810]: 2025-12-02 09:44:38.638017776 +0000 UTC m=+0.143916877 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:44:38 localhost podman[262810]: 2025-12-02 09:44:38.691948087 +0000 UTC m=+0.197847138 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 2 04:44:38 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 2 04:44:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:44:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:44:40 localhost podman[262833]: 2025-12-02 09:44:40.070721108 +0000 UTC m=+0.076187788 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 04:44:40 localhost podman[262834]: 2025-12-02 09:44:40.129494435 +0000 UTC 
m=+0.128824219 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 04:44:40 localhost podman[262833]: 2025-12-02 09:44:40.156879838 +0000 UTC m=+0.162346518 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base 
Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 04:44:40 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:44:40 localhost podman[262834]: 2025-12-02 09:44:40.22897869 +0000 UTC m=+0.228308504 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 04:44:40 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:44:42 localhost openstack_network_exporter[241816]: ERROR 09:44:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:44:42 localhost openstack_network_exporter[241816]: ERROR 09:44:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:44:42 localhost openstack_network_exporter[241816]: ERROR 09:44:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:44:42 localhost openstack_network_exporter[241816]: ERROR 09:44:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:44:42 localhost openstack_network_exporter[241816]: Dec 2 04:44:42 localhost openstack_network_exporter[241816]: ERROR 09:44:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:44:42 localhost openstack_network_exporter[241816]: Dec 2 04:44:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48898 DF PROTO=TCP SPT=42590 DPT=9102 SEQ=3670853150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5508D7E0000000001030307) Dec 2 04:44:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 04:44:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48899 DF PROTO=TCP SPT=42590 DPT=9102 SEQ=3670853150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55091A20000000001030307) Dec 2 04:44:47 localhost podman[262876]: 2025-12-02 09:44:47.07468385 +0000 UTC m=+0.076065824 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6) Dec 2 04:44:47 localhost podman[262876]: 2025-12-02 09:44:47.113097728 +0000 UTC m=+0.114479782 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.6, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9) Dec 2 04:44:47 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 04:44:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46471 DF PROTO=TCP SPT=55932 DPT=9102 SEQ=839054200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55095230000000001030307) Dec 2 04:44:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:44:49 localhost podman[262897]: 2025-12-02 09:44:49.084162291 +0000 UTC m=+0.084496331 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , 
managed_by=edpm_ansible) Dec 2 04:44:49 localhost podman[262897]: 2025-12-02 09:44:49.099833157 +0000 UTC m=+0.100167187 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:44:49 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:44:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48900 DF PROTO=TCP SPT=42590 DPT=9102 SEQ=3670853150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55099A30000000001030307) Dec 2 04:44:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47920 DF PROTO=TCP SPT=33656 DPT=9102 SEQ=3142268360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5509D220000000001030307) Dec 2 04:44:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48901 DF PROTO=TCP SPT=42590 DPT=9102 SEQ=3670853150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD550A9620000000001030307) Dec 2 04:44:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 04:44:55 localhost podman[263006]: 2025-12-02 09:44:55.077112737 +0000 UTC m=+0.080482669 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:44:55 localhost podman[263006]: 2025-12-02 09:44:55.090947487 +0000 UTC m=+0.094317469 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:44:55 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:45:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48902 DF PROTO=TCP SPT=42590 DPT=9102 SEQ=3670853150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD550C9220000000001030307) Dec 2 04:45:03 localhost sshd[263026]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:45:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:45:03.153 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:45:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:45:03.153 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:45:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:45:03.154 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:45:03 localhost systemd-logind[760]: New session 59 of user zuul. Dec 2 04:45:03 localhost systemd[1]: Started Session 59 of User zuul. 
Dec 2 04:45:03 localhost podman[239757]: time="2025-12-02T09:45:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:45:03 localhost nova_compute[229585]: 2025-12-02 09:45:03.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:03 localhost nova_compute[229585]: 2025-12-02 09:45:03.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 2 04:45:03 localhost podman[239757]: @ - - [02/Dec/2025:09:45:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1" Dec 2 04:45:03 localhost podman[239757]: @ - - [02/Dec/2025:09:45:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17701 "" "Go-http-client/1.1" Dec 2 04:45:04 localhost python3.9[263137]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:45:05 localhost python3.9[263249]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:45:05 localhost network[263266]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:45:05 localhost network[263267]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:45:05 localhost network[263268]: It is advised to switch to 'NetworkManager' instead for network management. 
Dec 2 04:45:06 localhost nova_compute[229585]: 2025-12-02 09:45:06.665 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:45:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:45:08 localhost systemd[1]: tmp-crun.ElXlTJ.mount: Deactivated successfully. Dec 2 04:45:08 localhost podman[263329]: 2025-12-02 09:45:08.278249183 +0000 UTC m=+0.105757078 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:45:08 localhost podman[263329]: 2025-12-02 09:45:08.316991381 +0000 UTC m=+0.144499316 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:45:08 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 04:45:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:45:08 localhost podman[263377]: 2025-12-02 09:45:08.834174979 +0000 UTC m=+0.091946717 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:45:08 localhost podman[263377]: 2025-12-02 09:45:08.868938747 +0000 UTC m=+0.126710445 container exec_died 
8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:45:08 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:45:10 localhost python3.9[263544]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:45:10 localhost nova_compute[229585]: 2025-12-02 09:45:10.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:45:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:45:11 localhost systemd[1]: tmp-crun.3eKSHs.mount: Deactivated successfully. 
Dec 2 04:45:11 localhost podman[263570]: 2025-12-02 09:45:11.082100263 +0000 UTC m=+0.085350728 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:45:11 localhost podman[263570]: 2025-12-02 09:45:11.110785025 +0000 UTC m=+0.114035530 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 2 04:45:11 localhost podman[263569]: 2025-12-02 09:45:11.067746506 +0000 UTC m=+0.074766415 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:45:11 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:45:11 localhost podman[263569]: 2025-12-02 09:45:11.148739369 +0000 UTC m=+0.155759298 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 04:45:11 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: 
Deactivated successfully. Dec 2 04:45:11 localhost python3.9[263649]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:45:11 localhost nova_compute[229585]: 2025-12-02 09:45:11.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:11 localhost nova_compute[229585]: 2025-12-02 09:45:11.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:11 localhost nova_compute[229585]: 2025-12-02 09:45:11.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:11 localhost nova_compute[229585]: 2025-12-02 09:45:11.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 2 04:45:11 localhost nova_compute[229585]: 2025-12-02 09:45:11.657 229589 DEBUG 
nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 2 04:45:12 localhost openstack_network_exporter[241816]: ERROR 09:45:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:45:12 localhost openstack_network_exporter[241816]: ERROR 09:45:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:45:12 localhost openstack_network_exporter[241816]: ERROR 09:45:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:45:12 localhost openstack_network_exporter[241816]: ERROR 09:45:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:45:12 localhost openstack_network_exporter[241816]: Dec 2 04:45:12 localhost openstack_network_exporter[241816]: ERROR 09:45:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:45:12 localhost openstack_network_exporter[241816]: Dec 2 04:45:12 localhost nova_compute[229585]: 2025-12-02 09:45:12.652 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:12 localhost nova_compute[229585]: 2025-12-02 09:45:12.652 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:12 localhost nova_compute[229585]: 2025-12-02 09:45:12.667 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] 
Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:12 localhost nova_compute[229585]: 2025-12-02 09:45:12.667 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:12 localhost nova_compute[229585]: 2025-12-02 09:45:12.668 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:45:15 localhost python3.9[263761]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:45:15 localhost nova_compute[229585]: 2025-12-02 09:45:15.642 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:15 localhost nova_compute[229585]: 2025-12-02 09:45:15.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:45:15 localhost nova_compute[229585]: 2025-12-02 09:45:15.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:45:15 
localhost nova_compute[229585]: 2025-12-02 09:45:15.655 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:45:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48425 DF PROTO=TCP SPT=47784 DPT=9102 SEQ=4163748844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55102AE0000000001030307) Dec 2 04:45:16 localhost python3.9[263871]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:45:16 localhost nova_compute[229585]: 2025-12-02 09:45:16.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:16 localhost nova_compute[229585]: 2025-12-02 09:45:16.900 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:45:16 localhost nova_compute[229585]: 2025-12-02 09:45:16.901 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:45:16 localhost nova_compute[229585]: 2025-12-02 09:45:16.901 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:45:16 localhost nova_compute[229585]: 2025-12-02 09:45:16.902 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:45:16 localhost nova_compute[229585]: 2025-12-02 09:45:16.902 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:45:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48426 DF PROTO=TCP SPT=47784 DPT=9102 SEQ=4163748844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55106A30000000001030307) Dec 2 04:45:17 localhost python3.9[264002]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:45:17 localhost nova_compute[229585]: 2025-12-02 09:45:17.368 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf 
/etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:45:17 localhost nova_compute[229585]: 2025-12-02 09:45:17.523 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:45:17 localhost nova_compute[229585]: 2025-12-02 09:45:17.525 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12493MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", 
"product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:45:17 localhost nova_compute[229585]: 2025-12-02 09:45:17.525 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:45:17 localhost nova_compute[229585]: 2025-12-02 09:45:17.526 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:45:17 localhost nova_compute[229585]: 2025-12-02 09:45:17.624 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:45:17 localhost nova_compute[229585]: 2025-12-02 09:45:17.624 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] 
Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:45:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48903 DF PROTO=TCP SPT=42590 DPT=9102 SEQ=3670853150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55109230000000001030307) Dec 2 04:45:17 localhost nova_compute[229585]: 2025-12-02 09:45:17.674 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 04:45:17 localhost nova_compute[229585]: 2025-12-02 09:45:17.728 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 04:45:17 localhost nova_compute[229585]: 2025-12-02 09:45:17.729 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 
'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 04:45:17 localhost nova_compute[229585]: 2025-12-02 09:45:17.746 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 04:45:17 localhost nova_compute[229585]: 2025-12-02 09:45:17.766 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: 
COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_RESCUE_BFV,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_MMX,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 04:45:17 localhost nova_compute[229585]: 2025-12-02 09:45:17.791 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:45:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:45:18 localhost podman[264099]: 2025-12-02 09:45:18.095543392 +0000 UTC m=+0.085347406 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, distribution-scope=public, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Dec 2 04:45:18 localhost podman[264099]: 2025-12-02 09:45:18.107705413 +0000 UTC m=+0.097509507 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 04:45:18 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 04:45:18 localhost nova_compute[229585]: 2025-12-02 09:45:18.259 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:45:18 localhost nova_compute[229585]: 2025-12-02 09:45:18.266 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:45:18 localhost nova_compute[229585]: 2025-12-02 09:45:18.295 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:45:18 localhost nova_compute[229585]: 2025-12-02 09:45:18.298 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:45:18 localhost nova_compute[229585]: 2025-12-02 09:45:18.299 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" 
"released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:45:18 localhost nova_compute[229585]: 2025-12-02 09:45:18.300 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:18 localhost python3.9[264154]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48427 DF PROTO=TCP SPT=47784 DPT=9102 SEQ=4163748844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5510EA20000000001030307) Dec 2 04:45:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. 
Dec 2 04:45:19 localhost podman[264266]: 2025-12-02 09:45:19.254501378 +0000 UTC m=+0.083153930 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:45:19 localhost podman[264266]: 2025-12-02 09:45:19.286663317 +0000 UTC m=+0.115315849 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:45:19 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:45:19 localhost python3.9[264267]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:45:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46472 DF PROTO=TCP SPT=55932 DPT=9102 SEQ=839054200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55113220000000001030307) Dec 2 04:45:21 localhost python3.9[264401]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:45:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48428 DF PROTO=TCP SPT=47784 DPT=9102 SEQ=4163748844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5511E630000000001030307) Dec 2 04:45:23 localhost python3.9[264513]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:45:23 localhost network[264530]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:45:23 localhost network[264531]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:45:23 localhost network[264532]: It is advised to switch to 'NetworkManager' instead for network management. Dec 2 04:45:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 04:45:25 localhost podman[264555]: 2025-12-02 09:45:25.384923124 +0000 UTC m=+0.064237123 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 04:45:25 localhost podman[264555]: 2025-12-02 09:45:25.399363744 +0000 UTC m=+0.078677783 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:45:25 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:45:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 2 04:45:26 localhost sshd[264608]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:45:29 localhost python3.9[264788]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 2 04:45:30 localhost python3.9[264898]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Dec 2 04:45:30 localhost python3.9[265008]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:45:31 localhost python3.9[265065]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48429 DF PROTO=TCP SPT=47784 DPT=9102 SEQ=4163748844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5513F220000000001030307) Dec 2 04:45:32 localhost python3.9[265175]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present 
path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:32 localhost python3.9[265285]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:45:33 localhost python3.9[265395]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:45:33 localhost podman[239757]: time="2025-12-02T09:45:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:45:33 localhost podman[239757]: @ - - [02/Dec/2025:09:45:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1" Dec 2 04:45:33 localhost podman[239757]: @ - - [02/Dec/2025:09:45:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17701 "" "Go-http-client/1.1" Dec 2 04:45:34 localhost python3.9[265507]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:45:34 localhost python3.9[265619]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None 
executable=None creates=None removes=None stdin=None Dec 2 04:45:35 localhost python3.9[265730]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:35 localhost systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 75.7 (252 of 333 items), suggesting rotation. Dec 2 04:45:35 localhost systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 04:45:35 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:45:35 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:45:35 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:45:36 localhost python3.9[265841]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:37 localhost python3.9[265951]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:37 localhost python3.9[266061]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:38 localhost python3.9[266171]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. 
Dec 2 04:45:38 localhost systemd[1]: tmp-crun.UKZMpR.mount: Deactivated successfully. Dec 2 04:45:38 localhost podman[266225]: 2025-12-02 09:45:38.593663695 +0000 UTC m=+0.085994897 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:45:38 localhost podman[266225]: 2025-12-02 
09:45:38.603252036 +0000 UTC m=+0.095583238 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm) Dec 2 04:45:38 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 04:45:38 localhost python3.9[266301]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:45:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:45:39 localhost systemd[1]: tmp-crun.vkRWyk.mount: Deactivated successfully. Dec 2 04:45:39 localhost podman[266321]: 2025-12-02 09:45:39.073242769 +0000 UTC m=+0.080192690 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:45:39 localhost podman[266321]: 2025-12-02 09:45:39.087119381 +0000 UTC m=+0.094068922 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:45:39 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:45:39 localhost python3.9[266435]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:45:40 localhost python3.9[266545]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:45:40 localhost python3.9[266602]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None 
serole=None selevel=None attributes=None Dec 2 04:45:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:45:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:45:41 localhost systemd[1]: tmp-crun.1frTEM.mount: Deactivated successfully. Dec 2 04:45:41 localhost podman[266714]: 2025-12-02 09:45:41.308976312 +0000 UTC m=+0.058379377 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller) Dec 2 04:45:41 localhost podman[266713]: 2025-12-02 09:45:41.39277016 +0000 UTC m=+0.141497115 container 
health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 04:45:41 localhost podman[266714]: 2025-12-02 09:45:41.419919815 +0000 UTC m=+0.169322880 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:45:41 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:45:41 localhost python3.9[266712]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:45:41 localhost podman[266713]: 2025-12-02 09:45:41.475125544 +0000 UTC m=+0.223852499 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 04:45:41 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:45:41 localhost python3.9[266812]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:45:42 localhost openstack_network_exporter[241816]: ERROR 09:45:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:45:42 localhost openstack_network_exporter[241816]: ERROR 09:45:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:45:42 localhost openstack_network_exporter[241816]: ERROR 09:45:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:45:42 localhost openstack_network_exporter[241816]: ERROR 09:45:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:45:42 localhost openstack_network_exporter[241816]: Dec 2 04:45:42 localhost openstack_network_exporter[241816]: ERROR 09:45:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:45:42 localhost openstack_network_exporter[241816]: Dec 2 04:45:42 localhost python3.9[266922]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:43 localhost python3.9[267032]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:45:43 localhost python3.9[267089]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:44 localhost python3.9[267199]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:45:44 localhost python3.9[267256]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:45 localhost python3.9[267366]: 
ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:45:45 localhost systemd[1]: Reloading. Dec 2 04:45:45 localhost systemd-rc-local-generator[267388]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:45:45 localhost systemd-sysv-generator[267392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26088 DF PROTO=TCP SPT=55788 DPT=9102 SEQ=335465398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55177DE0000000001030307) Dec 2 04:45:46 localhost python3.9[267514]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:45:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26089 DF PROTO=TCP SPT=55788 DPT=9102 SEQ=335465398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5517BE20000000001030307) Dec 2 04:45:47 localhost python3.9[267571]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None 
selevel=None setype=None attributes=None Dec 2 04:45:47 localhost nova_compute[229585]: 2025-12-02 09:45:47.599 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48430 DF PROTO=TCP SPT=47784 DPT=9102 SEQ=4163748844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5517F220000000001030307) Dec 2 04:45:47 localhost python3.9[267681]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:45:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:45:48 localhost podman[267739]: 2025-12-02 09:45:48.304382003 +0000 UTC m=+0.081644704 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Dec 2 04:45:48 localhost podman[267739]: 2025-12-02 09:45:48.321676879 +0000 UTC m=+0.098939570 container exec_died 
bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 2 04:45:48 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 04:45:48 localhost python3.9[267738]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26090 DF PROTO=TCP SPT=55788 DPT=9102 SEQ=335465398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55183E20000000001030307) Dec 2 04:45:49 localhost python3.9[267867]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:45:49 localhost systemd[1]: Reloading. 
Dec 2 04:45:49 localhost systemd-rc-local-generator[267892]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:45:49 localhost systemd-sysv-generator[267897]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:45:49 localhost systemd[1]: Starting Create netns directory... 
Dec 2 04:45:49 localhost podman[267905]: 2025-12-02 09:45:49.57057108 +0000 UTC m=+0.075604260 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:45:49 localhost podman[267905]: 2025-12-02 09:45:49.584981318 +0000 UTC m=+0.090014548 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:45:49 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 04:45:49 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 04:45:49 localhost systemd[1]: Finished Create netns directory. Dec 2 04:45:49 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:45:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48904 DF PROTO=TCP SPT=42590 DPT=9102 SEQ=3670853150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55187220000000001030307) Dec 2 04:45:51 localhost python3.9[268042]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:45:52 localhost python3.9[268152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:45:52 localhost python3.9[268209]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/multipathd/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/multipathd/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:45:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26091 DF PROTO=TCP SPT=55788 DPT=9102 SEQ=335465398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55193A30000000001030307) Dec 2 04:45:53 localhost python3.9[268370]: 
ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:45:54 localhost python3.9[268497]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:45:54 localhost python3.9[268554]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/multipathd.json _original_basename=.a7nnnnsg recurse=False state=file path=/var/lib/kolla/config_files/multipathd.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:55 localhost python3.9[268664]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 04:45:55 localhost podman[268775]: 2025-12-02 09:45:55.97606022 +0000 UTC m=+0.091747990 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Dec 2 04:45:55 localhost podman[268775]: 2025-12-02 09:45:55.991883382 +0000 UTC m=+0.107571162 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 04:45:56 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:45:57 localhost python3.9[268958]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False Dec 2 04:45:58 localhost python3.9[269086]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:45:59 localhost python3.9[269196]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 2 04:46:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26092 DF PROTO=TCP SPT=55788 DPT=9102 SEQ=335465398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD551B3220000000001030307) Dec 2 04:46:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:46:03.154 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:46:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:46:03.155 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:46:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:46:03.155 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:46:03 localhost podman[239757]: time="2025-12-02T09:46:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:46:03 localhost 
podman[239757]: @ - - [02/Dec/2025:09:46:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1" Dec 2 04:46:03 localhost podman[239757]: @ - - [02/Dec/2025:09:46:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17698 "" "Go-http-client/1.1" Dec 2 04:46:03 localhost python3[269333]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:46:04 localhost python3[269333]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7",#012 "Digest": "sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:11:02.031267563Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": 
"fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 249482216,#012 "VirtualSize": 249482216,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:8c448567789503f6c5be645a12473dfc27734872532d528b6ee764c214f9f2f3"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL 
org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' 
];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:05.672474685Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:06.113425253Z",#012 Dec 2 04:46:04 localhost python3.9[269506]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:46:05 localhost python3.9[269618]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:06 localhost python3.9[269673]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:46:06 localhost python3.9[269782]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668766.1289968-1366-235958845725721/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False 
_original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:07 localhost python3.9[269837]: ansible-systemd Invoked with state=started name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:07 localhost nova_compute[229585]: 2025-12-02 09:46:07.805 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:46:09 localhost podman[269911]: 2025-12-02 09:46:09.079547389 +0000 UTC m=+0.068896426 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 
'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 04:46:09 localhost podman[269911]: 2025-12-02 09:46:09.096030701 +0000 UTC m=+0.085379718 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 04:46:09 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 04:46:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:46:09 localhost systemd[1]: tmp-crun.wpZvJN.mount: Deactivated successfully. Dec 2 04:46:09 localhost podman[269966]: 2025-12-02 09:46:09.225297942 +0000 UTC m=+0.087708348 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, 
container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 04:46:09 localhost python3.9[269965]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:46:09 localhost podman[269966]: 2025-12-02 09:46:09.263956638 +0000 UTC m=+0.126367044 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:46:09 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 04:46:10 localhost python3.9[270099]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:10 localhost python3.9[270209]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 2 04:46:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:46:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. 
Dec 2 04:46:11 localhost podman[270320]: 2025-12-02 09:46:11.587006006 +0000 UTC m=+0.077218910 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:46:11 localhost podman[270321]: 2025-12-02 09:46:11.637090109 +0000 UTC m=+0.121191177 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 
'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 04:46:11 localhost nova_compute[229585]: 2025-12-02 09:46:11.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:11 localhost podman[270321]: 2025-12-02 09:46:11.645959338 +0000 UTC m=+0.130060386 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 2 04:46:11 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:46:11 localhost podman[270320]: 2025-12-02 09:46:11.702505748 +0000 UTC m=+0.192718572 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 04:46:11 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:46:11 localhost python3.9[270319]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Dec 2 04:46:12 localhost openstack_network_exporter[241816]: ERROR 09:46:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:46:12 localhost openstack_network_exporter[241816]: ERROR 09:46:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:46:12 localhost openstack_network_exporter[241816]: ERROR 09:46:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:46:12 localhost openstack_network_exporter[241816]: ERROR 09:46:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:46:12 localhost openstack_network_exporter[241816]: Dec 2 04:46:12 localhost openstack_network_exporter[241816]: ERROR 09:46:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:46:12 localhost openstack_network_exporter[241816]: Dec 2 04:46:12 localhost python3.9[270469]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:46:12 localhost nova_compute[229585]: 2025-12-02 09:46:12.636 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:12 localhost nova_compute[229585]: 2025-12-02 09:46:12.639 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:12 localhost python3.9[270526]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:13 localhost python3.9[270636]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:13 localhost nova_compute[229585]: 2025-12-02 09:46:13.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:13 localhost nova_compute[229585]: 2025-12-02 09:46:13.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:14 localhost python3.9[270746]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ 
install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:46:14 localhost nova_compute[229585]: 2025-12-02 09:46:14.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:14 localhost nova_compute[229585]: 2025-12-02 09:46:14.641 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 
2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.437 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10330 DF PROTO=TCP SPT=58450 DPT=9102 SEQ=1913653352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD551ED0E0000000001030307) Dec 2 04:46:16 localhost nova_compute[229585]: 2025-12-02 09:46:16.642 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:16 localhost nova_compute[229585]: 2025-12-02 09:46:16.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:46:16 localhost nova_compute[229585]: 2025-12-02 09:46:16.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:46:16 localhost nova_compute[229585]: 2025-12-02 09:46:16.657 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:46:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10331 DF PROTO=TCP SPT=58450 DPT=9102 SEQ=1913653352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD551F1220000000001030307) Dec 2 04:46:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26093 DF PROTO=TCP SPT=55788 DPT=9102 SEQ=335465398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD551F3220000000001030307) Dec 2 04:46:17 localhost nova_compute[229585]: 2025-12-02 09:46:17.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:17 localhost nova_compute[229585]: 2025-12-02 09:46:17.659 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:46:17 localhost nova_compute[229585]: 2025-12-02 09:46:17.659 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:46:17 localhost nova_compute[229585]: 2025-12-02 09:46:17.660 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - 
- - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:46:17 localhost nova_compute[229585]: 2025-12-02 09:46:17.660 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:46:17 localhost nova_compute[229585]: 2025-12-02 09:46:17.660 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:46:18 localhost nova_compute[229585]: 2025-12-02 09:46:18.117 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:46:18 localhost nova_compute[229585]: 2025-12-02 09:46:18.294 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:46:18 localhost nova_compute[229585]: 2025-12-02 09:46:18.296 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12478MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:46:18 localhost nova_compute[229585]: 2025-12-02 09:46:18.296 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:46:18 localhost nova_compute[229585]: 2025-12-02 09:46:18.297 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:46:18 localhost nova_compute[229585]: 2025-12-02 09:46:18.476 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:46:18 localhost nova_compute[229585]: 2025-12-02 09:46:18.476 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:46:18 localhost nova_compute[229585]: 2025-12-02 09:46:18.497 229589 DEBUG 
oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:46:18 localhost python3.9[270878]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:46:18 localhost nova_compute[229585]: 2025-12-02 09:46:18.945 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:46:18 localhost nova_compute[229585]: 2025-12-02 09:46:18.953 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:46:18 localhost nova_compute[229585]: 2025-12-02 09:46:18.968 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:46:18 localhost nova_compute[229585]: 2025-12-02 09:46:18.970 229589 DEBUG nova.compute.resource_tracker 
[None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:46:18 localhost nova_compute[229585]: 2025-12-02 09:46:18.970 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:46:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:46:19 localhost podman[270906]: 2025-12-02 09:46:19.048560594 +0000 UTC m=+0.050864838 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, name=ubi9-minimal, vcs-type=git, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7) Dec 2 04:46:19 localhost podman[270906]: 2025-12-02 09:46:19.057768374 +0000 UTC m=+0.060072628 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, version=9.6, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter) Dec 2 04:46:19 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 04:46:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10332 DF PROTO=TCP SPT=58450 DPT=9102 SEQ=1913653352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD551F9220000000001030307) Dec 2 04:46:19 localhost python3.9[271033]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:46:20 localhost podman[271051]: 2025-12-02 09:46:20.079742384 +0000 UTC m=+0.083834930 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', 
'--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:46:20 localhost podman[271051]: 2025-12-02 09:46:20.088511371 +0000 UTC m=+0.092603917 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:46:20 
localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 04:46:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48431 DF PROTO=TCP SPT=47784 DPT=9102 SEQ=4163748844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD551FD230000000001030307) Dec 2 04:46:20 localhost python3.9[271166]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:46:20 localhost systemd[1]: Reloading. Dec 2 04:46:20 localhost systemd-rc-local-generator[271192]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:46:20 localhost systemd-sysv-generator[271195]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:46:20 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:20 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:20 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:20 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:46:20 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:20 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:20 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:20 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:21 localhost python3.9[271310]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:46:21 localhost network[271327]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:46:21 localhost network[271328]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:46:21 localhost network[271329]: It is advised to switch to 'NetworkManager' instead for network management. Dec 2 04:46:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10333 DF PROTO=TCP SPT=58450 DPT=9102 SEQ=1913653352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55208E30000000001030307) Dec 2 04:46:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:46:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:46:27 localhost systemd[1]: tmp-crun.DwhRIl.mount: Deactivated successfully. 
Dec 2 04:46:27 localhost podman[271471]: 2025-12-02 09:46:27.090244974 +0000 UTC m=+0.090509633 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2) Dec 2 04:46:27 localhost podman[271471]: 2025-12-02 09:46:27.102114105 +0000 UTC m=+0.102378734 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd) Dec 2 04:46:27 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:46:27 localhost python3.9[271583]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:29 localhost python3.9[271694]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:30 localhost python3.9[271805]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:31 localhost python3.9[271916]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10334 DF PROTO=TCP SPT=58450 DPT=9102 SEQ=1913653352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55229220000000001030307) Dec 2 04:46:31 localhost python3.9[272027]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:32 localhost python3.9[272138]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:33 localhost python3.9[272249]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service 
state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:33 localhost podman[239757]: time="2025-12-02T09:46:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:46:33 localhost podman[239757]: @ - - [02/Dec/2025:09:46:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1" Dec 2 04:46:33 localhost podman[239757]: @ - - [02/Dec/2025:09:46:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17697 "" "Go-http-client/1.1" Dec 2 04:46:33 localhost python3.9[272360]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:35 localhost python3.9[272471]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:36 localhost python3.9[272581]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:36 localhost python3.9[272691]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:37 localhost python3.9[272801]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:38 localhost python3.9[272911]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:38 localhost python3.9[273021]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:39 localhost sshd[273132]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:46:39 localhost python3.9[273131]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:46:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:46:39 localhost podman[273245]: 2025-12-02 09:46:39.796968425 +0000 UTC m=+0.085305502 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125) Dec 2 04:46:39 localhost podman[273245]: 2025-12-02 09:46:39.805825566 +0000 UTC m=+0.094162603 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 04:46:39 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 04:46:39 localhost podman[273244]: 2025-12-02 09:46:39.890990853 +0000 UTC m=+0.181968351 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:46:39 localhost podman[273244]: 2025-12-02 09:46:39.901834634 +0000 UTC m=+0.192812132 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, 
name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:46:39 localhost python3.9[273243]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:39 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 04:46:40 localhost python3.9[273395]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:41 localhost python3.9[273505]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:41 localhost python3.9[273615]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:46:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:46:42 localhost systemd[1]: tmp-crun.xiwr6i.mount: Deactivated successfully. 
Dec 2 04:46:42 localhost openstack_network_exporter[241816]: ERROR 09:46:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:46:42 localhost openstack_network_exporter[241816]: ERROR 09:46:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:46:42 localhost openstack_network_exporter[241816]: ERROR 09:46:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:46:42 localhost podman[273712]: 2025-12-02 09:46:42.096626878 +0000 UTC m=+0.091739549 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:46:42 localhost openstack_network_exporter[241816]: ERROR 09:46:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:46:42 localhost openstack_network_exporter[241816]: Dec 2 04:46:42 localhost openstack_network_exporter[241816]: ERROR 09:46:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:46:42 localhost openstack_network_exporter[241816]: Dec 2 04:46:42 localhost podman[273714]: 2025-12-02 09:46:42.115720882 +0000 UTC m=+0.105349056 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3) Dec 2 04:46:42 localhost podman[273712]: 2025-12-02 09:46:42.138335904 +0000 UTC m=+0.133448575 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:46:42 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:46:42 localhost podman[273714]: 2025-12-02 09:46:42.159856902 +0000 UTC m=+0.149485036 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller) Dec 2 
04:46:42 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 04:46:42 localhost python3.9[273746]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:42 localhost python3.9[273876]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:43 localhost python3.9[273986]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:44 localhost python3.9[274096]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Dec 2 04:46:44 localhost python3.9[274206]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:45 localhost python3.9[274316]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:46:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59040 DF PROTO=TCP SPT=40808 DPT=9102 SEQ=3749391443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD552623E0000000001030307) Dec 2 04:46:46 localhost python3.9[274426]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 2 04:46:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59041 DF PROTO=TCP SPT=40808 DPT=9102 SEQ=3749391443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AD55266620000000001030307) Dec 2 04:46:47 localhost python3.9[274536]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:46:47 localhost systemd[1]: Reloading. Dec 2 04:46:47 localhost systemd-rc-local-generator[274558]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:46:47 localhost systemd-sysv-generator[274564]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10335 DF PROTO=TCP SPT=58450 DPT=9102 SEQ=1913653352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55269230000000001030307) Dec 2 04:46:48 localhost python3.9[274682]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:46:48 localhost python3.9[274793]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:46:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59042 DF PROTO=TCP SPT=40808 DPT=9102 SEQ=3749391443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5526E620000000001030307) Dec 2 04:46:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 
MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26094 DF PROTO=TCP SPT=55788 DPT=9102 SEQ=335465398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55271220000000001030307) Dec 2 04:46:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:46:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:46:50 localhost podman[274854]: 2025-12-02 09:46:50.149994772 +0000 UTC m=+0.140496002 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm) Dec 2 04:46:50 localhost podman[274854]: 2025-12-02 09:46:50.197037311 +0000 UTC m=+0.187538601 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 
Minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 04:46:50 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 04:46:50 localhost podman[274920]: 2025-12-02 09:46:50.297089303 +0000 UTC m=+0.138478249 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:46:50 localhost podman[274920]: 2025-12-02 09:46:50.332017813 +0000 UTC m=+0.173406799 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:46:50 localhost systemd[1]: 
3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 04:46:50 localhost python3.9[274933]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:46:50 localhost python3.9[275054]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:46:51 localhost python3.9[275165]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:46:52 localhost python3.9[275276]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:46:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59043 DF PROTO=TCP SPT=40808 DPT=9102 SEQ=3749391443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5527E220000000001030307) Dec 2 04:46:53 localhost python3.9[275387]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True 
strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:46:55 localhost python3.9[275498]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:46:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:46:57 localhost systemd[1]: tmp-crun.9UxryQ.mount: Deactivated successfully. Dec 2 04:46:57 localhost podman[275610]: 2025-12-02 09:46:57.307526238 +0000 UTC m=+0.089360746 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 04:46:57 localhost podman[275610]: 2025-12-02 09:46:57.319811763 +0000 UTC m=+0.101646251 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 04:46:57 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:46:57 localhost python3.9[275609]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:46:57 localhost python3.9[275754]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:46:58 localhost python3.9[275899]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:46:59 localhost python3.9[276061]: ansible-ansible.builtin.file Invoked with group=zuul 
mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:46:59 localhost python3.9[276189]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:47:00 localhost python3.9[276299]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:47:00 localhost python3.9[276409]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:47:01 localhost python3.9[276519]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 2 04:47:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59044 DF PROTO=TCP SPT=40808 DPT=9102 SEQ=3749391443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5529F220000000001030307) Dec 2 04:47:02 localhost python3.9[276629]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 2 04:47:02 localhost python3.9[276757]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 2 04:47:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:47:03.157 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:47:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:47:03.157 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:47:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:47:03.158 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:47:03 localhost podman[239757]: time="2025-12-02T09:47:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:47:03 localhost podman[239757]: @ - - [02/Dec/2025:09:47:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1" Dec 2 04:47:03 localhost podman[239757]: @ - - [02/Dec/2025:09:47:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17700 "" "Go-http-client/1.1" Dec 2 04:47:08 localhost python3.9[276867]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Dec 2 04:47:09 localhost sshd[276886]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:47:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:47:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:47:10 localhost systemd-logind[760]: New session 60 of user zuul. Dec 2 04:47:10 localhost systemd[1]: Started Session 60 of User zuul. 
Dec 2 04:47:10 localhost podman[276889]: 2025-12-02 09:47:10.153080575 +0000 UTC m=+0.154089977 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 04:47:10 localhost podman[276888]: 2025-12-02 09:47:10.109124129 +0000 UTC m=+0.112167234 container health_status 
8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:47:10 localhost podman[276889]: 2025-12-02 09:47:10.166258268 +0000 UTC m=+0.167267650 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 04:47:10 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 04:47:10 localhost podman[276888]: 2025-12-02 09:47:10.195045439 +0000 UTC m=+0.198088534 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:47:10 
localhost systemd[1]: session-60.scope: Deactivated successfully. Dec 2 04:47:10 localhost systemd-logind[760]: Session 60 logged out. Waiting for processes to exit. Dec 2 04:47:10 localhost systemd-logind[760]: Removed session 60. Dec 2 04:47:10 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:47:10 localhost python3.9[277041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:47:10 localhost nova_compute[229585]: 2025-12-02 09:47:10.972 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:47:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:47:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 04:47:11 localhost python3.9[277127]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668830.410593-3041-20451610905541/.source.json follow=False 
_original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:47:11 localhost nova_compute[229585]: 2025-12-02 09:47:11.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:47:12 localhost openstack_network_exporter[241816]: ERROR 09:47:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:47:12 localhost openstack_network_exporter[241816]: ERROR 09:47:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:47:12 localhost openstack_network_exporter[241816]: ERROR 09:47:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:47:12 localhost openstack_network_exporter[241816]: ERROR 09:47:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:47:12 localhost openstack_network_exporter[241816]: Dec 2 04:47:12 localhost openstack_network_exporter[241816]: ERROR 09:47:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:47:12 localhost openstack_network_exporter[241816]: Dec 2 04:47:12 localhost python3.9[277235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:47:12 localhost python3.9[277290]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t 
dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:47:12 localhost nova_compute[229585]: 2025-12-02 09:47:12.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:47:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:47:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. 
Dec 2 04:47:13 localhost podman[277399]: 2025-12-02 09:47:13.08042647 +0000 UTC m=+0.082738834 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 04:47:13 localhost podman[277399]: 2025-12-02 09:47:13.087820297 +0000 UTC 
m=+0.090132651 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true) Dec 2 04:47:13 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:47:13 localhost podman[277400]: 2025-12-02 09:47:13.144775389 +0000 UTC m=+0.142029618 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.schema-version=1.0) Dec 2 04:47:13 localhost python3.9[277398]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:47:13 localhost podman[277400]: 2025-12-02 09:47:13.251720762 +0000 UTC m=+0.248974951 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 04:47:13 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:47:13 localhost python3.9[277525]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668832.7037854-3041-208133948468060/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:47:14 localhost python3.9[277633]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:47:14 localhost nova_compute[229585]: 2025-12-02 09:47:14.636 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:47:14 localhost nova_compute[229585]: 2025-12-02 09:47:14.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:47:14 localhost nova_compute[229585]: 2025-12-02 09:47:14.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:47:14 localhost python3.9[277719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 
setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668833.7850075-3041-258902145269385/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=2618deabb92e3bb6763a4ba7147e78332a2d3a7c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:47:15 localhost python3.9[277827]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:47:15 localhost nova_compute[229585]: 2025-12-02 09:47:15.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:47:15 localhost nova_compute[229585]: 2025-12-02 09:47:15.640 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:47:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:47:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.2 total, 600.0 interval#012Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 04:47:15 localhost python3.9[277913]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668834.940844-3041-77157463090852/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:47:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10172 DF PROTO=TCP SPT=38574 DPT=9102 SEQ=3134322641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD552D76E0000000001030307) Dec 2 04:47:16 localhost python3.9[278021]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 
get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:47:16 localhost nova_compute[229585]: 2025-12-02 09:47:16.636 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:47:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10173 DF PROTO=TCP SPT=38574 DPT=9102 SEQ=3134322641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD552DB630000000001030307) Dec 2 04:47:17 localhost python3.9[278107]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668836.005264-3041-121873187305884/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:47:17 localhost nova_compute[229585]: 2025-12-02 09:47:17.179 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:47:17 localhost nova_compute[229585]: 2025-12-02 09:47:17.179 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:47:17 localhost nova_compute[229585]: 2025-12-02 09:47:17.179 229589 DEBUG 
nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:47:17 localhost nova_compute[229585]: 2025-12-02 09:47:17.211 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:47:17 localhost nova_compute[229585]: 2025-12-02 09:47:17.639 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:47:17 localhost nova_compute[229585]: 2025-12-02 09:47:17.657 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:47:17 localhost nova_compute[229585]: 2025-12-02 09:47:17.657 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:47:17 localhost nova_compute[229585]: 2025-12-02 09:47:17.658 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:47:17 
localhost nova_compute[229585]: 2025-12-02 09:47:17.658 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:47:17 localhost nova_compute[229585]: 2025-12-02 09:47:17.659 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:47:17 localhost python3.9[278218]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:47:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59045 DF PROTO=TCP SPT=40808 DPT=9102 SEQ=3749391443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD552DF220000000001030307) Dec 2 04:47:18 localhost nova_compute[229585]: 2025-12-02 09:47:18.129 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:47:18 localhost nova_compute[229585]: 2025-12-02 09:47:18.312 229589 WARNING nova.virt.libvirt.driver [None 
req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:47:18 localhost nova_compute[229585]: 2025-12-02 09:47:18.314 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12466MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:47:18 localhost nova_compute[229585]: 2025-12-02 09:47:18.314 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:47:18 localhost nova_compute[229585]: 2025-12-02 09:47:18.314 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:47:18 localhost nova_compute[229585]: 2025-12-02 09:47:18.376 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:47:18 localhost nova_compute[229585]: 2025-12-02 09:47:18.376 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:47:18 localhost nova_compute[229585]: 2025-12-02 09:47:18.398 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:47:18 localhost python3.9[278349]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:47:18 localhost nova_compute[229585]: 2025-12-02 09:47:18.882 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:47:18 localhost nova_compute[229585]: 2025-12-02 09:47:18.887 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:47:18 localhost nova_compute[229585]: 2025-12-02 09:47:18.904 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': 
{'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:47:18 localhost nova_compute[229585]: 2025-12-02 09:47:18.907 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:47:18 localhost nova_compute[229585]: 2025-12-02 09:47:18.907 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:47:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10174 DF PROTO=TCP SPT=38574 DPT=9102 SEQ=3134322641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD552E3630000000001030307) Dec 2 04:47:19 localhost python3.9[278481]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:47:19 localhost python3.9[278593]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:47:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10336 DF PROTO=TCP SPT=58450 DPT=9102 SEQ=1913653352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD552E7220000000001030307) Dec 2 04:47:20 localhost python3.9[278701]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:47:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:47:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:47:21 localhost podman[278738]: 2025-12-02 09:47:21.082587217 +0000 UTC m=+0.080623808 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, managed_by=edpm_ansible, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 2 04:47:21 localhost podman[278738]: 2025-12-02 09:47:21.093954336 +0000 UTC m=+0.091990957 container exec_died 
bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=) Dec 2 04:47:21 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 04:47:21 localhost podman[278737]: 2025-12-02 09:47:21.130653589 +0000 UTC m=+0.129634360 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': 
{'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:47:21 localhost podman[278737]: 2025-12-02 09:47:21.168888878 +0000 UTC m=+0.167869659 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:47:21 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:47:21 localhost python3.9[278854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:47:21 localhost python3.9[278909]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute.json _original_basename=nova_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:47:22 localhost python3.9[279017]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:47:22 localhost python3.9[279072]: ansible-ansible.legacy.file Invoked with mode=0700 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute_init.json _original_basename=nova_compute_init.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute_init.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:47:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10175 DF PROTO=TCP SPT=38574 DPT=9102 SEQ=3134322641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AD552F3220000000001030307) Dec 2 04:47:24 localhost python3.9[279182]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Dec 2 04:47:25 localhost python3.9[279292]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:47:25 localhost python3[279402]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:47:26 localhost python3[279402]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",#012 "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:31:10.62653219Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 
"tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211779450,#012 "VirtualSize": 1211779450,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",#012 "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": 
[#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del 
/etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Dec 2 04:47:27 localhost python3.9[279575]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:47:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 04:47:28 localhost podman[279595]: 2025-12-02 09:47:28.083762188 +0000 UTC m=+0.079417412 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 04:47:28 localhost podman[279595]: 2025-12-02 09:47:28.094686453 +0000 UTC m=+0.090341687 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 04:47:28 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:47:28 localhost python3.9[279706]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Dec 2 04:47:29 localhost python3.9[279816]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:47:30 localhost python3[279926]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:47:30 localhost python3[279926]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",#012 "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:31:10.62653219Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": 
"SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211779450,#012 "VirtualSize": 1211779450,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",#012 "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": 
"2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main 
override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Dec 2 04:47:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10176 DF PROTO=TCP SPT=38574 DPT=9102 SEQ=3134322641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55313230000000001030307) Dec 2 04:47:31 localhost python3.9[280097]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:47:32 localhost python3.9[280209]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:47:32 localhost python3.9[280318]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668852.2447531-3717-210465183495401/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root 
group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:47:33 localhost python3.9[280373]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:47:33 localhost podman[239757]: time="2025-12-02T09:47:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:47:33 localhost podman[239757]: @ - - [02/Dec/2025:09:47:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1" Dec 2 04:47:33 localhost podman[239757]: @ - - [02/Dec/2025:09:47:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17697 "" "Go-http-client/1.1" Dec 2 04:47:34 localhost python3.9[280483]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:47:35 localhost python3.9[280591]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:47:36 localhost python3.9[280699]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:47:37 localhost python3.9[280809]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent 
executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None 
quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 2 04:47:37 localhost systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 103.9 (346 of 333 items), suggesting rotation. Dec 2 04:47:37 localhost systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 04:47:37 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:47:37 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:47:38 localhost python3.9[280943]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:47:39 localhost systemd[1]: Stopping nova_compute container... Dec 2 04:47:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:47:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. 
Dec 2 04:47:41 localhost podman[280961]: 2025-12-02 09:47:41.088043722 +0000 UTC m=+0.087772239 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:47:41 localhost podman[280961]: 2025-12-02 09:47:41.128937533 +0000 UTC m=+0.128666020 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:47:41 localhost podman[280962]: 2025-12-02 09:47:41.138067723 +0000 UTC m=+0.135461568 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 04:47:41 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:47:41 localhost podman[280962]: 2025-12-02 09:47:41.152829164 +0000 UTC m=+0.150223049 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 04:47:41 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 04:47:41 localhost nova_compute[229585]: 2025-12-02 09:47:41.874 229589 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored#033[00m Dec 2 04:47:41 localhost nova_compute[229585]: 2025-12-02 09:47:41.877 229589 DEBUG oslo_concurrency.lockutils [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:47:41 localhost nova_compute[229585]: 2025-12-02 09:47:41.878 229589 DEBUG oslo_concurrency.lockutils [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:47:41 localhost nova_compute[229585]: 2025-12-02 09:47:41.878 229589 DEBUG oslo_concurrency.lockutils [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:47:42 localhost openstack_network_exporter[241816]: ERROR 09:47:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:47:42 localhost openstack_network_exporter[241816]: ERROR 09:47:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:47:42 localhost openstack_network_exporter[241816]: ERROR 09:47:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:47:42 localhost openstack_network_exporter[241816]: ERROR 09:47:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:47:42 localhost openstack_network_exporter[241816]: Dec 2 
04:47:42 localhost openstack_network_exporter[241816]: ERROR 09:47:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:47:42 localhost openstack_network_exporter[241816]: Dec 2 04:47:42 localhost journal[228953]: End of file while reading data: Input/output error Dec 2 04:47:42 localhost systemd[1]: libpod-e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256.scope: Deactivated successfully. Dec 2 04:47:42 localhost systemd[1]: libpod-e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256.scope: Consumed 17.770s CPU time. Dec 2 04:47:42 localhost podman[280947]: 2025-12-02 09:47:42.287149762 +0000 UTC m=+2.935368792 container died e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:47:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256-userdata-shm.mount: Deactivated successfully. Dec 2 04:47:42 localhost podman[280947]: 2025-12-02 09:47:42.423919958 +0000 UTC m=+3.072138968 container cleanup e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0) Dec 2 04:47:42 localhost podman[280947]: nova_compute Dec 2 04:47:42 localhost podman[281029]: error opening file `/run/crun/e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256/status`: No such file or directory Dec 2 04:47:42 localhost podman[281016]: 2025-12-02 09:47:42.493159037 +0000 UTC m=+0.042096470 container cleanup e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 2 04:47:42 localhost podman[281016]: nova_compute Dec 2 04:47:42 localhost systemd[1]: edpm_nova_compute.service: 
Deactivated successfully. Dec 2 04:47:42 localhost systemd[1]: Stopped nova_compute container. Dec 2 04:47:42 localhost systemd[1]: Starting nova_compute container... Dec 2 04:47:42 localhost systemd[1]: Started libcrun container. Dec 2 04:47:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:42 localhost podman[281031]: 2025-12-02 09:47:42.622227677 +0000 UTC m=+0.101504518 container init e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true) Dec 2 04:47:42 localhost podman[281031]: 2025-12-02 09:47:42.630929393 +0000 UTC m=+0.110206234 container start e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}) Dec 2 04:47:42 localhost podman[281031]: nova_compute Dec 2 04:47:42 localhost nova_compute[281045]: + sudo -E kolla_set_configs Dec 2 04:47:42 localhost systemd[1]: Started nova_compute container. Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Validating config file Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Copying service configuration files Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Deleting /etc/nova/nova.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /etc/nova/nova.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Dec 2 04:47:42 localhost 
nova_compute[281045]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Deleting /etc/ceph Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Creating directory /etc/ceph Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /etc/ceph Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Copying 
/var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Deleting /usr/sbin/iscsiadm Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Writing out command to execute Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting 
permission for /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:47:42 localhost nova_compute[281045]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 2 04:47:42 localhost nova_compute[281045]: ++ cat /run_command Dec 2 04:47:42 localhost nova_compute[281045]: + CMD=nova-compute Dec 2 04:47:42 localhost nova_compute[281045]: + ARGS= Dec 2 04:47:42 localhost nova_compute[281045]: + sudo kolla_copy_cacerts Dec 2 04:47:42 localhost nova_compute[281045]: + [[ ! -n '' ]] Dec 2 04:47:42 localhost nova_compute[281045]: + . kolla_extend_start Dec 2 04:47:42 localhost nova_compute[281045]: + echo 'Running command: '\''nova-compute'\''' Dec 2 04:47:42 localhost nova_compute[281045]: Running command: 'nova-compute' Dec 2 04:47:42 localhost nova_compute[281045]: + umask 0022 Dec 2 04:47:42 localhost nova_compute[281045]: + exec nova-compute Dec 2 04:47:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:47:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. 
Dec 2 04:47:43 localhost podman[281082]: 2025-12-02 09:47:43.334198029 +0000 UTC m=+0.084272572 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 04:47:43 localhost podman[281082]: 2025-12-02 09:47:43.364910458 +0000 UTC 
m=+0.114984961 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 04:47:43 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:47:43 localhost podman[281118]: 2025-12-02 09:47:43.443304747 +0000 UTC m=+0.121430917 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 04:47:43 localhost podman[281118]: 2025-12-02 09:47:43.531891789 +0000 UTC m=+0.210017949 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3) Dec 2 04:47:43 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:47:43 localhost python3.9[281208]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 2 04:47:44 localhost systemd[1]: Started libpod-conmon-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e.scope. Dec 2 04:47:44 localhost systemd[1]: Started libcrun container. Dec 2 04:47:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:44 localhost podman[281232]: 2025-12-02 09:47:44.111825918 +0000 UTC m=+0.101466206 container init 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Dec 2 04:47:44 localhost podman[281232]: 2025-12-02 09:47:44.123674191 +0000 UTC m=+0.113314469 container start 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', 
'/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true) Dec 2 04:47:44 localhost python3.9[281208]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Applying nova statedir ownership Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436 Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Dec 2 04:47:44 localhost nova_compute_init[281252]: 
INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/20273498b7380904530133bcb3f720bd45f4f00b810dc4597d81d23acd8f9673 Dec 2 04:47:44 localhost nova_compute_init[281252]: INFO:nova_statedir:Nova statedir ownership complete Dec 2 04:47:44 localhost systemd[1]: 
libpod-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e.scope: Deactivated successfully. Dec 2 04:47:44 localhost podman[281267]: 2025-12-02 09:47:44.289371282 +0000 UTC m=+0.078653008 container died 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 04:47:44 localhost systemd[1]: var-lib-containers-storage-overlay-ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6-merged.mount: Deactivated successfully. Dec 2 04:47:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e-userdata-shm.mount: Deactivated successfully. 
Dec 2 04:47:44 localhost podman[281267]: 2025-12-02 09:47:44.315828942 +0000 UTC m=+0.105110648 container cleanup 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2) Dec 2 04:47:44 localhost systemd[1]: libpod-conmon-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e.scope: Deactivated successfully. 
Dec 2 04:47:44 localhost nova_compute[281045]: 2025-12-02 09:47:44.384 281049 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 2 04:47:44 localhost nova_compute[281045]: 2025-12-02 09:47:44.385 281049 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 2 04:47:44 localhost nova_compute[281045]: 2025-12-02 09:47:44.385 281049 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 2 04:47:44 localhost nova_compute[281045]: 2025-12-02 09:47:44.385 281049 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Dec 2 04:47:44 localhost nova_compute[281045]: 2025-12-02 09:47:44.496 281049 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:47:44 localhost nova_compute[281045]: 2025-12-02 09:47:44.519 281049 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:47:44 localhost nova_compute[281045]: 2025-12-02 09:47:44.519 281049 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. 
execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Dec 2 04:47:44 localhost nova_compute[281045]: 2025-12-02 09:47:44.925 281049 INFO nova.virt.driver [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.058 281049 INFO nova.compute.provider_config [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Dec 2 04:47:45 localhost systemd[1]: session-59.scope: Deactivated successfully. Dec 2 04:47:45 localhost systemd[1]: session-59.scope: Consumed 1min 29.058s CPU time. Dec 2 04:47:45 localhost systemd-logind[760]: Session 59 logged out. Waiting for processes to exit. Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.068 281049 DEBUG oslo_concurrency.lockutils [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:47:45 localhost systemd-logind[760]: Removed session 59. 
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.069 281049 DEBUG oslo_concurrency.lockutils [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.070 281049 DEBUG oslo_concurrency.lockutils [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.071 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.072 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.072 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.072 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.073 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.073 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.073 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.074 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.074 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.074 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.075 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.075 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.075 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.076 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.076 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.076 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.077 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.077 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.077 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] config_source = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.078 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] console_host = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.078 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.078 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.079 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.079 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.079 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.080 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost 
nova_compute[281045]: 2025-12-02 09:47:45.080 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.080 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.081 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.081 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.081 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.082 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.082 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.083 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.083 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.083 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.084 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.084 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.084 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] host = 
np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.085 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.085 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.085 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.086 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.086 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.086 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.087 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] 
instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.087 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.087 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.088 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.088 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.088 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.089 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.089 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] key = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.089 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.090 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.090 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.090 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.091 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.091 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.091 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost 
nova_compute[281045]: 2025-12-02 09:47:45.092 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.092 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.092 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.093 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.093 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.093 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 
2025-12-02 09:47:45.093 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.094 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.094 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.095 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.095 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.095 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.095 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 
2025-12-02 09:47:45.096 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.096 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.096 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.097 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.097 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.098 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.098 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.098 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] my_block_storage_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.099 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] my_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.099 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.099 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.099 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.100 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.100 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.100 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.101 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.101 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.101 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.102 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.102 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.102 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.103 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.103 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.103 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.104 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.104 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.104 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.105 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.105 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost 
nova_compute[281045]: 2025-12-02 09:47:45.105 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.105 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.106 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.106 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.107 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.107 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.107 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.108 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.108 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.108 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.109 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.109 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.110 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.110 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.110 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.111 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.111 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.111 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.112 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.112 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.112 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.113 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] 
shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.113 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.114 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.114 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.114 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.115 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.115 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.115 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 
2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.115 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.116 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.116 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.117 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.117 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.117 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.117 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.118 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.118 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.119 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.119 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.119 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.119 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.119 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.120 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] volume_usage_poll_interval = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.120 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.120 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.120 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.120 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.121 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.121 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.121 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_metrics.metrics_process_name = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.121 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.121 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.122 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.122 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.122 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.122 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.123 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.123 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.123 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.123 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.123 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.124 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.124 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.124 281049 DEBUG 
oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.124 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.125 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.125 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.125 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.125 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.125 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.126 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.126 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.126 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.126 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.126 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.127 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.127 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.127 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.127 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.127 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.128 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.128 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.128 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.128 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.128 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] 
cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.129 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.129 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.129 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.129 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.129 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.130 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.130 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] 
cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.130 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.130 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.130 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.131 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.131 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.131 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.131 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.retry_delay = 0.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.131 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.132 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.132 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.132 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.132 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.132 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.133 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.133 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.133 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.133 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.134 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.134 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.134 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.134 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.134 281049 DEBUG 
oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.135 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.135 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.135 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.135 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.136 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.136 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.136 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.split_loggers 
= False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.137 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.137 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.137 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.137 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.138 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.138 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.138 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.max_concurrent_disk_ops = 0 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.138 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.139 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.139 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.139 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.139 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.140 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.140 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.140 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.140 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.141 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.141 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.141 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.141 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.142 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.collect_timing = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.142 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.142 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.142 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.143 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.143 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.143 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.143 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.144 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.144 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.144 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.144 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.145 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.145 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.145 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.145 281049 
DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.146 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.146 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.146 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.146 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.147 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.147 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.147 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.147 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.148 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.148 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.148 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.148 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.149 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.149 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] 
database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.149 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.149 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.150 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.150 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.150 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.150 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.151 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.sqlite_synchronous = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.151 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.151 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.151 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.152 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.152 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.152 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.153 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.153 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.153 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.153 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.154 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.154 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.154 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.155 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.155 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.155 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.155 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.156 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.156 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.156 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.156 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] devices.enabled_mdev_types = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.156 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.157 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.157 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.157 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.157 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.157 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.157 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.collect_timing = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.158 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.158 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.158 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.158 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.158 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.159 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.159 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.159 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.159 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.159 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.160 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.160 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.160 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.160 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.160 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.161 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.161 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.161 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.161 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.161 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.161 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost sshd[281312]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.162 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.162 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.162 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.162 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.162 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.163 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.163 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.163 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.163 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.164 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.164 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.164 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.164 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.164 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.165 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.165 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.165 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.165 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.165 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.165 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.166 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.166 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.166 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.166 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.166 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.167 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.167 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.167 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.167 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.167 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.167 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.167 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.168 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.168 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.168 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.168 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.168 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.168 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.170 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.170 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.170 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.170 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.170 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.170 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.170 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.171 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.171 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.171 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.171 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.171 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.171 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.171 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.172 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.172 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.172 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.172 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.172 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.172 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.172 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.173 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.173 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.173 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.173 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.173 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.173 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.173 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.175 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.175 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.175 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.175 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.175 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.175 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.175 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.178 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.178 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.179 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.179 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.179 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.182 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.183 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.183 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.184 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.184 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.185 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.185 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.186 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.186 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.187 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.187 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.187 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.188 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.188 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.188 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.189 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.189 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.189 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.190 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.190 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.191 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.191 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.191 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.192 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.192 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.192 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.192 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.193 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.193 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.193 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.194 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.194 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.195 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.195 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.195 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.196 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.196 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.197 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.197 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.198 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.198 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.198 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.199 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.199 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.200 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.200 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.200 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.201 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.201 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.201 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.202 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.202 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.203 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.203 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.203 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.204 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.204 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.205 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.205 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.205 281049 WARNING oslo_config.cfg [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Dec 2 04:47:45 localhost nova_compute[281045]: live_migration_uri is deprecated for removal in favor of two other options that Dec 2 04:47:45 localhost nova_compute[281045]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Dec 2 04:47:45 localhost nova_compute[281045]: and ``live_migration_inbound_addr`` respectively. Dec 2 04:47:45 localhost nova_compute[281045]: ). 
Its value may be silently ignored in the future.#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.206 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.206 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.207 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.207 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.208 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.208 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.208 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] 
libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.209 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.209 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.210 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.210 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.210 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.211 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.211 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.quobyte_client_cfg = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.212 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.212 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.212 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.213 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.213 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rbd_secret_uuid = c7c8e171-a193-56fb-95fa-8879fcfa7074 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.214 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.214 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] 
libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.214 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.215 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.215 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.216 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.216 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.216 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.217 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.smbfs_mount_options = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.217 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.218 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.218 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.219 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.219 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.219 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.220 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.swtpm_group = tss 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.220 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.221 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.221 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.221 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.222 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.222 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.223 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.223 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.223 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.224 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.224 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.225 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.225 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.225 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 
2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.226 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.226 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.227 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.227 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.227 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.228 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.228 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 
2025-12-02 09:47:45.229 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.229 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.229 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.230 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.230 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.230 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.231 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.231 281049 DEBUG 
oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.231 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.232 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.232 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.232 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.232 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.233 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.233 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] 
neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.233 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.234 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.234 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.234 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.234 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.235 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.235 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.235 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.236 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.236 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.236 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.236 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.237 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.237 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] notifications.versioned_notifications_topics = 
['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.237 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.238 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.238 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.238 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.238 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.239 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.239 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.239 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.239 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.240 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.240 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.240 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.241 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.241 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.241 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.241 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.242 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.242 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.242 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.242 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.243 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 
localhost nova_compute[281045]: 2025-12-02 09:47:45.243 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.243 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.244 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.244 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.244 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.244 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.245 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 
09:47:45.245 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.245 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.245 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.246 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.246 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.246 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.247 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.247 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.247 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.247 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.248 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.248 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.248 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.249 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.249 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.driver = 
nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.249 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.249 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.250 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.250 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.250 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.251 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.251 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.ram = 51200 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.251 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.251 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.252 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.252 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.252 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.253 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.253 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.253 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.253 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.254 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.254 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.254 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.254 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.255 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] 
scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.255 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.255 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.256 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.256 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.256 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.256 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.257 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.257 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.257 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.258 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.258 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.258 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.258 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.259 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.259 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.259 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.260 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] 
filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.260 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.260 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.260 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.261 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.261 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.261 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.261 281049 DEBUG 
oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.262 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.262 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.262 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.263 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.263 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.263 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 
09:47:45.264 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.264 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.264 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.264 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.265 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.265 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.265 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 
09:47:45.265 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.266 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.266 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.266 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.267 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.267 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.267 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.267 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.268 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.268 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.268 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.269 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.269 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.269 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.269 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - 
- - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.270 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.270 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.270 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.271 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.271 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.271 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.271 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] upgrade_levels.baseapi = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.272 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.272 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.272 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.272 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.272 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.273 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.273 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.273 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.273 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.273 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.274 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.274 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.274 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.274 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.api_retry_count = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.274 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.275 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.275 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.275 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.275 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.275 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.276 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 
localhost nova_compute[281045]: 2025-12-02 09:47:45.276 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.276 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.276 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.276 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.276 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.277 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.277 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.277 281049 DEBUG 
oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.277 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.277 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.278 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.278 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.278 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.278 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.278 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.279 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.279 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.279 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.279 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.279 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.280 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.280 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.280 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.280 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.280 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.281 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.281 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.281 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.281 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.281 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.282 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.282 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.282 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.282 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.282 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 
09:47:45.282 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.283 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.283 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.283 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.283 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.283 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.284 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.284 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.284 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.284 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.284 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.285 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.285 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.285 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.285 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.285 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.286 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.286 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.286 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.286 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.286 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] 
wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.287 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.287 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.287 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.287 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.287 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.288 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.288 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - 
- - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.288 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.288 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.288 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.289 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.289 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.289 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.289 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 
- - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.289 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.290 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.290 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.290 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.290 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.290 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.290 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.291 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.291 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.291 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.291 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.291 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.292 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.292 281049 
DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.292 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.292 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.292 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.293 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.293 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.293 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.293 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.293 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.293 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.294 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.294 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.294 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.294 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - 
- - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.294 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.295 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.295 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.295 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.295 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.295 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.296 281049 DEBUG 
oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.296 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.296 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.296 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.296 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.296 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.297 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.297 281049 
DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.297 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.297 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.297 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.298 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.298 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.298 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.298 
281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.298 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.299 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.299 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.299 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.299 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.299 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.299 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.300 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.300 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.300 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.300 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.300 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.301 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.301 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] 
oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.301 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.301 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.301 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.301 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.302 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.302 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.302 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.split_loggers = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.302 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.302 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.303 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.303 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.303 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.303 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.303 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.304 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.304 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.304 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.304 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.304 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.305 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.305 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_reports.log_dir = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.305 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.305 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.305 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.305 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.306 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.306 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.306 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.306 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.306 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.307 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.307 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.307 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.307 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.307 281049 DEBUG 
oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.308 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.308 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.308 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.308 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.308 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.308 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 
2025-12-02 09:47:45.309 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.309 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.309 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.309 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.309 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.310 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.310 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.310 281049 
DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.310 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.310 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.311 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.311 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.311 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.311 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.311 281049 DEBUG 
oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.311 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.312 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.312 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.312 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.312 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.312 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.313 281049 DEBUG oslo_service.service [None 
req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.314 281049 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.325 281049 INFO nova.virt.node [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Determined node identity 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from /var/lib/nova/compute_id#033[00m
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.326 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.327 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.327 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.327 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.335 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.337 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.337 281049 INFO nova.virt.libvirt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.341 281049 INFO nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Libvirt host capabilities
[libvirt host capabilities XML garbled in this capture (tags stripped); recoverable fields: host UUID 64aa5208-7bf7-490c-857b-3c1a3cae8bb3, arch x86_64, CPU model EPYC-Rome-v4, vendor AMD, migration transports tcp and rdma, NUMA cell memory 16116612 KiB (4029153 pages, 0, 0), security models selinux (doi 0, base labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (doi 0, +107:+107), hvm guests for i686 (wordsize 32) and x86_64 (wordsize 64) via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (canonical pc), pc-q35-rhel9.8.0 (canonical q35), and pc-q35-rhel7.6.0 through pc-q35-rhel9.6.0]#033[00m
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.347 281049 DEBUG nova.virt.libvirt.host [None
req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.348 281049 DEBUG nova.virt.libvirt.volume.mount [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.351 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
[libvirt domain capabilities XML garbled in this capture (tags stripped); recoverable fields: emulator /usr/libexec/qemu-kvm, domain type kvm, machine pc-i440fx-rhel7.6.0, arch i686, firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd (loader types rom and pflash; secure-boot yes/no, enrolled-keys no), host-model CPU EPYC-Rome (vendor AMD), and a supported CPU model list including 486 and 486-v1, Broadwell with -IBRS, -noTSX, -noTSX-IBRS and v1 through v4 variants, Cascadelake-Server with -noTSX and v1 through v5 variants, Conroe and Conroe-v1, Cooperlake with v1 and v2, Denverton with v1 through v3, Dhyana with v1 and v2, EPYC, EPYC-Genoa and EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan with v1 and v2, EPYC-Rome with v1 through v4, EPYC-v1 through EPYC-v4, and GraniteRapids with GraniteRapids-v1; log truncated mid-list]
nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: GraniteRapids-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-IBRS Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-noTSX Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 
localhost nova_compute[281045]: Haswell-noTSX-IBRS Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-v4 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 
04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-noTSX Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 
localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-v4 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 
localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-v5 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-v6 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 
2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-v7 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 
04:47:45 localhost nova_compute[281045]: IvyBridge Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: IvyBridge-IBRS Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: IvyBridge-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: IvyBridge-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: KnightsMill Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: KnightsMill-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: 
Nehalem Dec 2 04:47:45 localhost nova_compute[281045]: Nehalem-IBRS Dec 2 04:47:45 localhost nova_compute[281045]: Nehalem-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Nehalem-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Opteron_G1 Dec 2 04:47:45 localhost nova_compute[281045]: Opteron_G1-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Opteron_G2 Dec 2 04:47:45 localhost nova_compute[281045]: Opteron_G2-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Opteron_G3 Dec 2 04:47:45 localhost nova_compute[281045]: Opteron_G3-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Opteron_G4 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Opteron_G4-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Opteron_G5 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Opteron_G5-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Penryn Dec 2 04:47:45 localhost nova_compute[281045]: Penryn-v1 Dec 2 04:47:45 localhost nova_compute[281045]: SandyBridge Dec 2 04:47:45 localhost nova_compute[281045]: SandyBridge-IBRS Dec 2 04:47:45 localhost nova_compute[281045]: SandyBridge-v1 Dec 2 04:47:45 localhost nova_compute[281045]: SandyBridge-v2 Dec 2 04:47:45 
localhost nova_compute[281045]: SapphireRapids Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: SapphireRapids-v1 Dec 2 
04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: SapphireRapids-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 
04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
Dec 2 04:47:45 localhost nova_compute[281045]: [libvirt <domainCapabilities> XML dump; element tags were lost in log capture and the repeated syslog prefix is collapsed here. Recoverable values, grouped by the apparent domainCapabilities sections:]
Dec 2 04:47:45 localhost nova_compute[281045]:   cpu models (continued): SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Dec 2 04:47:45 localhost nova_compute[281045]:   memory backing source types: file, anonymous, memfd
Dec 2 04:47:45 localhost nova_compute[281045]:   disk device types: disk, cdrom, floppy, lun; bus types: ide, fdc, scsi, virtio, usb, sata; models: virtio, virtio-transitional, virtio-non-transitional
Dec 2 04:47:45 localhost nova_compute[281045]:   graphics types: vnc, egl-headless, dbus
Dec 2 04:47:45 localhost nova_compute[281045]:   hostdev mode: subsystem; startupPolicy: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi
Dec 2 04:47:45 localhost nova_compute[281045]:   rng models: virtio, virtio-transitional, virtio-non-transitional; backend models: random, egd, builtin
Dec 2 04:47:45 localhost nova_compute[281045]:   filesystem driver types: path, handle, virtiofs
Dec 2 04:47:45 localhost nova_compute[281045]:   tpm models: tpm-tis, tpm-crb; backend models: emulator, external; backend version: 2.0
Dec 2 04:47:45 localhost nova_compute[281045]:   redirdev bus: usb; channel/chardev values: pty, unix; crypto values: qemu, builtin
Dec 2 04:47:45 localhost nova_compute[281045]:   interface backend types: default, passt
Dec 2 04:47:45 localhost nova_compute[281045]:   panic models: isa, hyperv
Dec 2 04:47:45 localhost nova_compute[281045]:   console/serial/channel types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Dec 2 04:47:45 localhost nova_compute[281045]:   hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; additional values: 4095, on, off, off, Linux KVM Hv
Dec 2 04:47:45 localhost nova_compute[281045]:   launchSecurity type: tdx
Dec 2 04:47:45 localhost nova_compute[281045]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.356 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 2 04:47:45 localhost nova_compute[281045]: [second <domainCapabilities> dump begins, tags likewise stripped:]
Dec 2 04:47:45 localhost nova_compute[281045]:   path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-q35-rhel9.8.0; arch: i686
Dec 2 04:47:45 localhost nova_compute[281045]:   os loader value: /usr/share/OVMF/OVMF_CODE.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: no; further on/off enum values: on, off; on, off
Dec 2 04:47:45 localhost nova_compute[281045]:   host CPU model: EPYC-Rome; vendor: AMD
Dec 2 04:47:45 localhost nova_compute[281045]:   cpu models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2 [dump continues]
Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Denverton-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dhyana Dec 2 04:47:45 localhost nova_compute[281045]: Dhyana-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dhyana-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Genoa Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 
localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Genoa-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-IBPB Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Milan Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 
04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Milan-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Milan-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Rome Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Rome-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Rome-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Rome-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: 
Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Rome-v4 Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-v1 Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-v2 Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-v4 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: GraniteRapids Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 
04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: GraniteRapids-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 
localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: GraniteRapids-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 
localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-IBRS Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-noTSX Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-noTSX-IBRS Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 
localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-v4 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-noTSX Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 
localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-v4 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 
2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-v5 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 
04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-v6 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Icelake-Server-v7 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
Dec 2 04:47:45 localhost nova_compute[281045]: [libvirt domain capabilities XML was logged here one element per line; the angle-bracket markup was lost in log capture, leaving only element text. Recovered values, in original order:]
Dec 2 04:47:45 localhost nova_compute[281045]: CPU models: IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Dec 2 04:47:45 localhost nova_compute[281045]: device/feature enum values: file anonymous memfd | disk cdrom floppy lun | fdc scsi virtio usb sata | virtio virtio-transitional virtio-non-transitional | vnc egl-headless dbus | subsystem | default mandatory requisite optional | usb pci scsi | virtio virtio-transitional virtio-non-transitional | random egd builtin | path handle virtiofs | tpm-tis tpm-crb | emulator external | 2.0 | usb | pty unix | qemu | builtin | default passt | isa hyperv | null vc pty dev file pipe stdio udp tcp unix qemu-vdagent dbus
Dec 2 04:47:45 localhost nova_compute[281045]: hyperv features: relaxed vapic spinlocks vpindex runtime synic stimer reset vendor_id frequencies reenlightenment tlbflush ipi avic emsr_bitmap xmm_input | 4095 on off off "Linux KVM Hv" | tdx
Dec 2 04:47:45 localhost nova_compute[281045]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.375 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.379 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 2 04:47:45 localhost nova_compute[281045]: [domain capabilities XML for machine_type=pc follows; markup again lost. Recovered values: /usr/libexec/qemu-kvm | kvm | pc-i440fx-rhel7.6.0 | x86_64 | /usr/share/OVMF/OVMF_CODE.secboot.fd | rom pflash | yes no | no (log truncated)]
04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: on Dec 2 04:47:45 localhost nova_compute[281045]: off Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: on Dec 2 04:47:45 localhost nova_compute[281045]: off Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Rome Dec 2 04:47:45 localhost nova_compute[281045]: AMD Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 
2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: 486 Dec 2 04:47:45 localhost nova_compute[281045]: 486-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell-IBRS Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell-noTSX Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell-noTSX-IBRS Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell-v2 Dec 2 04:47:45 localhost 
nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell-v4 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cascadelake-Server Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cascadelake-Server-noTSX Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cascadelake-Server-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cascadelake-Server-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cascadelake-Server-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cascadelake-Server-v4 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cascadelake-Server-v5 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Conroe Dec 2 04:47:45 localhost nova_compute[281045]: 
Conroe-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Cooperlake Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cooperlake-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cooperlake-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 
localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Denverton Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Denverton-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Denverton-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Denverton-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dhyana Dec 2 04:47:45 localhost nova_compute[281045]: Dhyana-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dhyana-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC Dec 2 04:47:45 localhost nova_compute[281045]: 
EPYC-Genoa Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Genoa-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: 
Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-IBPB Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Milan Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Milan-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Milan-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 
localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Rome Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Rome-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Rome-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Rome-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Rome-v4 Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-v1 Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-v2 Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-v4 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: GraniteRapids Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 
2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: GraniteRapids-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 
04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: GraniteRapids-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 
localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-IBRS Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-noTSX Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 
Dec 2 04:47:45 localhost nova_compute[281045]: Haswell-noTSX-IBRS Haswell-v1 Haswell-v2 Haswell-v3 Haswell-v4 Icelake-Server Icelake-Server-noTSX Icelake-Server-v1 Icelake-Server-v2 Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32
nova_compute[281045]: qemu32-v1 Dec 2 04:47:45 localhost nova_compute[281045]: qemu64 Dec 2 04:47:45 localhost nova_compute[281045]: qemu64-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: file Dec 2 04:47:45 localhost nova_compute[281045]: anonymous Dec 2 04:47:45 localhost nova_compute[281045]: memfd Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: disk Dec 2 04:47:45 localhost nova_compute[281045]: cdrom Dec 2 04:47:45 localhost nova_compute[281045]: floppy Dec 2 04:47:45 localhost nova_compute[281045]: lun Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: ide Dec 2 04:47:45 localhost nova_compute[281045]: fdc Dec 2 04:47:45 localhost nova_compute[281045]: scsi Dec 2 04:47:45 localhost nova_compute[281045]: virtio Dec 2 04:47:45 localhost nova_compute[281045]: usb Dec 2 04:47:45 localhost nova_compute[281045]: sata Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: virtio Dec 2 04:47:45 localhost nova_compute[281045]: virtio-transitional Dec 2 04:47:45 localhost nova_compute[281045]: virtio-non-transitional Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: vnc Dec 2 04:47:45 localhost nova_compute[281045]: egl-headless Dec 2 04:47:45 localhost nova_compute[281045]: dbus Dec 2 04:47:45 
localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: subsystem Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: default Dec 2 04:47:45 localhost nova_compute[281045]: mandatory Dec 2 04:47:45 localhost nova_compute[281045]: requisite Dec 2 04:47:45 localhost nova_compute[281045]: optional Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: usb Dec 2 04:47:45 localhost nova_compute[281045]: pci Dec 2 04:47:45 localhost nova_compute[281045]: scsi Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: virtio Dec 2 04:47:45 localhost nova_compute[281045]: virtio-transitional Dec 2 04:47:45 localhost nova_compute[281045]: virtio-non-transitional Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: random Dec 2 04:47:45 localhost nova_compute[281045]: egd Dec 2 04:47:45 localhost nova_compute[281045]: builtin Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: path Dec 2 04:47:45 localhost nova_compute[281045]: handle Dec 2 04:47:45 localhost nova_compute[281045]: virtiofs Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: tpm-tis Dec 2 04:47:45 localhost nova_compute[281045]: tpm-crb Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: emulator Dec 2 04:47:45 localhost nova_compute[281045]: external Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: 2.0 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: usb Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: pty Dec 2 04:47:45 localhost nova_compute[281045]: unix Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: qemu Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: builtin Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: default Dec 2 04:47:45 localhost nova_compute[281045]: passt Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 
localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: isa Dec 2 04:47:45 localhost nova_compute[281045]: hyperv Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: null Dec 2 04:47:45 localhost nova_compute[281045]: vc Dec 2 04:47:45 localhost nova_compute[281045]: pty Dec 2 04:47:45 localhost nova_compute[281045]: dev Dec 2 04:47:45 localhost nova_compute[281045]: file Dec 2 04:47:45 localhost nova_compute[281045]: pipe Dec 2 04:47:45 localhost nova_compute[281045]: stdio Dec 2 04:47:45 localhost nova_compute[281045]: udp Dec 2 04:47:45 localhost nova_compute[281045]: tcp Dec 2 04:47:45 localhost nova_compute[281045]: unix Dec 2 04:47:45 localhost nova_compute[281045]: qemu-vdagent Dec 2 04:47:45 localhost nova_compute[281045]: dbus Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: relaxed Dec 2 04:47:45 localhost nova_compute[281045]: vapic Dec 2 04:47:45 localhost nova_compute[281045]: spinlocks Dec 2 04:47:45 localhost nova_compute[281045]: vpindex Dec 2 04:47:45 localhost nova_compute[281045]: runtime Dec 2 04:47:45 localhost nova_compute[281045]: synic Dec 2 04:47:45 
localhost nova_compute[281045]: stimer Dec 2 04:47:45 localhost nova_compute[281045]: reset Dec 2 04:47:45 localhost nova_compute[281045]: vendor_id Dec 2 04:47:45 localhost nova_compute[281045]: frequencies Dec 2 04:47:45 localhost nova_compute[281045]: reenlightenment Dec 2 04:47:45 localhost nova_compute[281045]: tlbflush Dec 2 04:47:45 localhost nova_compute[281045]: ipi Dec 2 04:47:45 localhost nova_compute[281045]: avic Dec 2 04:47:45 localhost nova_compute[281045]: emsr_bitmap Dec 2 04:47:45 localhost nova_compute[281045]: xmm_input Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: 4095 Dec 2 04:47:45 localhost nova_compute[281045]: on Dec 2 04:47:45 localhost nova_compute[281045]: off Dec 2 04:47:45 localhost nova_compute[281045]: off Dec 2 04:47:45 localhost nova_compute[281045]: Linux KVM Hv Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: tdx Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.427 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: /usr/libexec/qemu-kvm Dec 2 04:47:45 localhost nova_compute[281045]: kvm Dec 2 04:47:45 localhost nova_compute[281045]: pc-q35-rhel9.8.0 Dec 2 04:47:45 localhost nova_compute[281045]: 
x86_64 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: efi Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Dec 2 04:47:45 localhost nova_compute[281045]: /usr/share/edk2/ovmf/OVMF_CODE.fd Dec 2 04:47:45 localhost nova_compute[281045]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Dec 2 04:47:45 localhost nova_compute[281045]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: rom Dec 2 04:47:45 localhost nova_compute[281045]: pflash Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: yes Dec 2 04:47:45 localhost nova_compute[281045]: no Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: yes Dec 2 04:47:45 localhost nova_compute[281045]: no Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: on Dec 2 04:47:45 localhost nova_compute[281045]: off Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: on Dec 2 04:47:45 localhost nova_compute[281045]: off Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 
04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Rome Dec 2 04:47:45 localhost nova_compute[281045]: AMD Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: 486 Dec 2 04:47:45 localhost nova_compute[281045]: 486-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell-IBRS Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell-noTSX Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell-noTSX-IBRS Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Broadwell-v4 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 
04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cascadelake-Server Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cascadelake-Server-noTSX Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cascadelake-Server-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 
04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cascadelake-Server-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cascadelake-Server-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cascadelake-Server-v4 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 
04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cascadelake-Server-v5 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Conroe Dec 2 04:47:45 localhost nova_compute[281045]: Conroe-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Cooperlake Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cooperlake-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Cooperlake-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Denverton Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 
04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Denverton-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Denverton-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Denverton-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dhyana Dec 2 04:47:45 localhost nova_compute[281045]: Dhyana-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dhyana-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Genoa Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: EPYC-Genoa-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
Dec 2 04:47:45 localhost nova_compute[281045]: supported CPU models: EPYC-IBPB EPYC-Milan EPYC-Milan-v1 EPYC-Milan-v2 EPYC-Rome EPYC-Rome-v1 EPYC-Rome-v2 EPYC-Rome-v3 EPYC-Rome-v4 EPYC-v1 EPYC-v2 EPYC-v3 EPYC-v4 GraniteRapids GraniteRapids-v1 GraniteRapids-v2 Haswell Haswell-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell-v1 Haswell-v2 Haswell-v3 Haswell-v4 Icelake-Server Icelake-Server-noTSX Icelake-Server-v1 Icelake-Server-v2 Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2
Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Skylake-Client-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Skylake-Client-v4 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Skylake-Server Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Skylake-Server-IBRS Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: 
Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Skylake-Server-noTSX-IBRS Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Skylake-Server-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Skylake-Server-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 
04:47:45 localhost nova_compute[281045]: Skylake-Server-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Skylake-Server-v4 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Skylake-Server-v5 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Snowridge Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 
localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Snowridge-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Snowridge-v2 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Snowridge-v3 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Snowridge-v4 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Westmere Dec 2 04:47:45 localhost nova_compute[281045]: Westmere-IBRS Dec 2 04:47:45 localhost nova_compute[281045]: Westmere-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Westmere-v2 Dec 2 04:47:45 localhost nova_compute[281045]: athlon Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: athlon-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: core2duo Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: core2duo-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: coreduo Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: coreduo-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: kvm32 Dec 2 04:47:45 localhost nova_compute[281045]: kvm32-v1 Dec 2 04:47:45 localhost nova_compute[281045]: kvm64 Dec 2 04:47:45 
localhost nova_compute[281045]: kvm64-v1 Dec 2 04:47:45 localhost nova_compute[281045]: n270 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: n270-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: pentium Dec 2 04:47:45 localhost nova_compute[281045]: pentium-v1 Dec 2 04:47:45 localhost nova_compute[281045]: pentium2 Dec 2 04:47:45 localhost nova_compute[281045]: pentium2-v1 Dec 2 04:47:45 localhost nova_compute[281045]: pentium3 Dec 2 04:47:45 localhost nova_compute[281045]: pentium3-v1 Dec 2 04:47:45 localhost nova_compute[281045]: phenom Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: phenom-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: qemu32 Dec 2 04:47:45 localhost nova_compute[281045]: qemu32-v1 Dec 2 04:47:45 localhost nova_compute[281045]: qemu64 Dec 2 04:47:45 localhost nova_compute[281045]: qemu64-v1 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: file Dec 2 04:47:45 localhost nova_compute[281045]: anonymous Dec 2 04:47:45 localhost nova_compute[281045]: memfd Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: disk Dec 2 04:47:45 localhost nova_compute[281045]: cdrom Dec 2 04:47:45 localhost nova_compute[281045]: floppy Dec 2 04:47:45 localhost nova_compute[281045]: lun Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: fdc Dec 2 04:47:45 localhost nova_compute[281045]: scsi Dec 2 04:47:45 localhost nova_compute[281045]: virtio Dec 2 04:47:45 localhost nova_compute[281045]: usb Dec 2 04:47:45 localhost nova_compute[281045]: sata Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: virtio Dec 2 04:47:45 localhost nova_compute[281045]: virtio-transitional Dec 2 04:47:45 localhost nova_compute[281045]: virtio-non-transitional Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: vnc Dec 2 04:47:45 localhost nova_compute[281045]: egl-headless Dec 2 04:47:45 localhost nova_compute[281045]: dbus Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: subsystem Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: default Dec 2 04:47:45 localhost nova_compute[281045]: mandatory Dec 2 04:47:45 localhost nova_compute[281045]: requisite Dec 2 04:47:45 localhost nova_compute[281045]: optional Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost 
nova_compute[281045]: usb Dec 2 04:47:45 localhost nova_compute[281045]: pci Dec 2 04:47:45 localhost nova_compute[281045]: scsi Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: virtio Dec 2 04:47:45 localhost nova_compute[281045]: virtio-transitional Dec 2 04:47:45 localhost nova_compute[281045]: virtio-non-transitional Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: random Dec 2 04:47:45 localhost nova_compute[281045]: egd Dec 2 04:47:45 localhost nova_compute[281045]: builtin Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: path Dec 2 04:47:45 localhost nova_compute[281045]: handle Dec 2 04:47:45 localhost nova_compute[281045]: virtiofs Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: tpm-tis Dec 2 04:47:45 localhost nova_compute[281045]: tpm-crb Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: emulator Dec 2 04:47:45 localhost nova_compute[281045]: external Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: 2.0 Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: 
Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: usb Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: pty Dec 2 04:47:45 localhost nova_compute[281045]: unix Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: qemu Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: builtin Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: default Dec 2 04:47:45 localhost nova_compute[281045]: passt Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: isa Dec 2 04:47:45 localhost nova_compute[281045]: hyperv Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: null Dec 2 04:47:45 localhost nova_compute[281045]: vc Dec 2 04:47:45 localhost nova_compute[281045]: pty Dec 2 04:47:45 localhost nova_compute[281045]: dev Dec 2 04:47:45 localhost nova_compute[281045]: file Dec 2 04:47:45 localhost nova_compute[281045]: pipe Dec 2 04:47:45 localhost nova_compute[281045]: stdio Dec 2 04:47:45 localhost 
nova_compute[281045]: udp Dec 2 04:47:45 localhost nova_compute[281045]: tcp Dec 2 04:47:45 localhost nova_compute[281045]: unix Dec 2 04:47:45 localhost nova_compute[281045]: qemu-vdagent Dec 2 04:47:45 localhost nova_compute[281045]: dbus Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: relaxed Dec 2 04:47:45 localhost nova_compute[281045]: vapic Dec 2 04:47:45 localhost nova_compute[281045]: spinlocks Dec 2 04:47:45 localhost nova_compute[281045]: vpindex Dec 2 04:47:45 localhost nova_compute[281045]: runtime Dec 2 04:47:45 localhost nova_compute[281045]: synic Dec 2 04:47:45 localhost nova_compute[281045]: stimer Dec 2 04:47:45 localhost nova_compute[281045]: reset Dec 2 04:47:45 localhost nova_compute[281045]: vendor_id Dec 2 04:47:45 localhost nova_compute[281045]: frequencies Dec 2 04:47:45 localhost nova_compute[281045]: reenlightenment Dec 2 04:47:45 localhost nova_compute[281045]: tlbflush Dec 2 04:47:45 localhost nova_compute[281045]: ipi Dec 2 04:47:45 localhost nova_compute[281045]: avic Dec 2 04:47:45 localhost nova_compute[281045]: emsr_bitmap Dec 2 04:47:45 localhost nova_compute[281045]: xmm_input Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: 4095 Dec 2 04:47:45 
localhost nova_compute[281045]: on Dec 2 04:47:45 localhost nova_compute[281045]: off Dec 2 04:47:45 localhost nova_compute[281045]: off Dec 2 04:47:45 localhost nova_compute[281045]: Linux KVM Hv Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: tdx Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: Dec 2 04:47:45 localhost nova_compute[281045]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.480 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.481 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.481 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.482 281049 INFO nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Secure Boot support detected#033[00m Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.484 281049 INFO 
nova.virt.libvirt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.484 281049 INFO nova.virt.libvirt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.494 281049 DEBUG nova.virt.libvirt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.510 281049 INFO nova.virt.node [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Determined node identity 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from /var/lib/nova/compute_id
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.526 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Verified node 9ec09c1a-d246-41d7-94f4-b482f646a9f1 matches my host np0005541914.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.553 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.609 281049 DEBUG oslo_concurrency.lockutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.609 281049 DEBUG oslo_concurrency.lockutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.609 281049 DEBUG oslo_concurrency.lockutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.610 281049 DEBUG nova.compute.resource_tracker [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 2 04:47:45 localhost nova_compute[281045]: 2025-12-02 09:47:45.610 281049 DEBUG oslo_concurrency.processutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 04:47:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26085 DF PROTO=TCP SPT=36006 DPT=9102 SEQ=3478909886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5534C9E0000000001030307)
Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.084 281049 DEBUG oslo_concurrency.processutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.211 281049 WARNING nova.virt.libvirt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.212 281049 DEBUG nova.compute.resource_tracker [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12513MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.212 281049 DEBUG oslo_concurrency.lockutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.213 281049 DEBUG oslo_concurrency.lockutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.305 281049 DEBUG nova.compute.resource_tracker [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.306 281049 DEBUG nova.compute.resource_tracker [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.321 281049 DEBUG nova.scheduler.client.report [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.384 281049 DEBUG nova.scheduler.client.report [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.384 281049 DEBUG nova.compute.provider_tree [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.398 281049 DEBUG nova.scheduler.client.report [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.420 281049 DEBUG nova.scheduler.client.report [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_G
RAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.437 281049 DEBUG oslo_concurrency.processutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.861 281049 DEBUG oslo_concurrency.processutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.866 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Dec 2 04:47:46 localhost nova_compute[281045]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.867 281049 INFO nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] kernel doesn't support AMD SEV#033[00m Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.868 281049 DEBUG nova.compute.provider_tree [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.869 281049 DEBUG nova.virt.libvirt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.896 281049 DEBUG nova.scheduler.client.report [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.923 281049 DEBUG nova.compute.resource_tracker [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.923 281049 DEBUG oslo_concurrency.lockutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.924 281049 DEBUG nova.service [None 
req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.959 281049 DEBUG nova.service [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Dec 2 04:47:46 localhost nova_compute[281045]: 2025-12-02 09:47:46.960 281049 DEBUG nova.servicegroup.drivers.db [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] DB_Driver: join new ServiceGroup member np0005541914.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Dec 2 04:47:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26086 DF PROTO=TCP SPT=36006 DPT=9102 SEQ=3478909886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55350A30000000001030307) Dec 2 04:47:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10177 DF PROTO=TCP SPT=38574 DPT=9102 SEQ=3134322641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55353220000000001030307) Dec 2 04:47:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26087 DF PROTO=TCP SPT=36006 DPT=9102 SEQ=3478909886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55358A20000000001030307) Dec 2 04:47:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59046 DF 
PROTO=TCP SPT=40808 DPT=9102 SEQ=3749391443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5535D220000000001030307) Dec 2 04:47:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:47:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:47:52 localhost podman[281379]: 2025-12-02 09:47:52.091075782 +0000 UTC m=+0.089470129 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:47:52 localhost podman[281379]: 2025-12-02 
09:47:52.101577744 +0000 UTC m=+0.099972121 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:47:52 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:47:52 localhost podman[281380]: 2025-12-02 09:47:52.198498461 +0000 UTC m=+0.196843046 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, io.openshift.expose-services=, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-type=git, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 2 04:47:52 localhost podman[281380]: 2025-12-02 09:47:52.21218526 +0000 UTC m=+0.210529835 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.) Dec 2 04:47:52 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 04:47:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26088 DF PROTO=TCP SPT=36006 DPT=9102 SEQ=3478909886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55368620000000001030307) Dec 2 04:47:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:47:59 localhost systemd[1]: tmp-crun.OzSf8X.mount: Deactivated successfully. Dec 2 04:47:59 localhost podman[281422]: 2025-12-02 09:47:59.085798145 +0000 UTC m=+0.080599437 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:47:59 localhost podman[281422]: 2025-12-02 09:47:59.096303856 +0000 UTC m=+0.091105138 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Dec 2 04:47:59 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:48:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26089 DF PROTO=TCP SPT=36006 DPT=9102 SEQ=3478909886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55389220000000001030307) Dec 2 04:48:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:48:03.158 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:48:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:48:03.158 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:48:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:48:03.158 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:48:03 localhost podman[239757]: time="2025-12-02T09:48:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:48:03 localhost podman[239757]: @ - - [02/Dec/2025:09:48:03 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1" Dec 2 04:48:03 localhost podman[239757]: @ - - [02/Dec/2025:09:48:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17702 "" "Go-http-client/1.1" Dec 2 04:48:04 localhost podman[281587]: Dec 2 04:48:04 localhost podman[281587]: 2025-12-02 09:48:04.05322295 +0000 UTC m=+0.073081658 container create b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_hofstadter, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.buildah.version=1.41.4, release=1763362218) Dec 2 04:48:04 localhost systemd[1]: Started libpod-conmon-b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350.scope. Dec 2 04:48:04 localhost systemd[1]: tmp-crun.3gFVUK.mount: Deactivated successfully. Dec 2 04:48:04 localhost systemd[1]: Started libcrun container. 
Dec 2 04:48:04 localhost podman[281587]: 2025-12-02 09:48:04.120949843 +0000 UTC m=+0.140808541 container init b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_hofstadter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.openshift.expose-services=, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.buildah.version=1.41.4, ceph=True, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, version=7) Dec 2 04:48:04 localhost podman[281587]: 2025-12-02 09:48:04.025558294 +0000 UTC m=+0.045416992 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:48:04 localhost systemd[1]: tmp-crun.YQVXs0.mount: Deactivated successfully. 
Dec 2 04:48:04 localhost podman[281587]: 2025-12-02 09:48:04.135850379 +0000 UTC m=+0.155709077 container start b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_hofstadter, vcs-type=git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-11-26T19:44:28Z) Dec 2 04:48:04 localhost podman[281587]: 2025-12-02 09:48:04.136086256 +0000 UTC m=+0.155944994 container attach b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_hofstadter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, distribution-scope=public, architecture=x86_64, release=1763362218, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_CLEAN=True, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7) Dec 2 04:48:04 localhost agitated_hofstadter[281602]: 167 167 Dec 2 04:48:04 localhost systemd[1]: libpod-b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350.scope: Deactivated successfully. Dec 2 04:48:04 localhost podman[281587]: 2025-12-02 09:48:04.14111395 +0000 UTC m=+0.160972698 container died b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_hofstadter, release=1763362218, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, architecture=x86_64, 
GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:48:04 localhost podman[281607]: 2025-12-02 09:48:04.253195351 +0000 UTC m=+0.098357942 container remove b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_hofstadter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, release=1763362218, io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, distribution-scope=public, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, architecture=x86_64, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, RELEASE=main) Dec 2 04:48:04 localhost systemd[1]: libpod-conmon-b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350.scope: Deactivated successfully. 
Dec 2 04:48:04 localhost podman[281630]: Dec 2 04:48:04 localhost podman[281630]: 2025-12-02 09:48:04.465163588 +0000 UTC m=+0.077076050 container create e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_jones, io.openshift.expose-services=, RELEASE=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , name=rhceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, version=7, release=1763362218, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:48:04 localhost podman[281630]: 2025-12-02 09:48:04.433956472 +0000 UTC m=+0.045869004 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:48:04 localhost systemd[1]: Started libpod-conmon-e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc.scope. Dec 2 04:48:04 localhost systemd[1]: Started libcrun container. 
Dec 2 04:48:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda2654cca2a3bf25e1d593d92ea0197fdeb2139d82abaea956267da0e5f201e/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 04:48:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda2654cca2a3bf25e1d593d92ea0197fdeb2139d82abaea956267da0e5f201e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 04:48:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda2654cca2a3bf25e1d593d92ea0197fdeb2139d82abaea956267da0e5f201e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 04:48:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda2654cca2a3bf25e1d593d92ea0197fdeb2139d82abaea956267da0e5f201e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 04:48:04 localhost podman[281630]: 2025-12-02 09:48:04.571487942 +0000 UTC m=+0.183400404 container init e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_jones, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, 
url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, GIT_BRANCH=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux ) Dec 2 04:48:04 localhost podman[281630]: 2025-12-02 09:48:04.582034675 +0000 UTC m=+0.193947137 container start e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_jones, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, distribution-scope=public, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:48:04 localhost podman[281630]: 2025-12-02 09:48:04.582335284 +0000 UTC m=+0.194247746 container attach e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_jones, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, RELEASE=main, CEPH_POINT_RELEASE=, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_CLEAN=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 04:48:05 localhost systemd[1]: var-lib-containers-storage-overlay-e498577e0240969a04071b726bc29775f1d342827de0f022348cd7ec1e73f3de-merged.mount: Deactivated successfully. 
Dec 2 04:48:05 localhost agitated_jones[281645]: [ Dec 2 04:48:05 localhost agitated_jones[281645]: { Dec 2 04:48:05 localhost agitated_jones[281645]: "available": false, Dec 2 04:48:05 localhost agitated_jones[281645]: "ceph_device": false, Dec 2 04:48:05 localhost agitated_jones[281645]: "device_id": "QEMU_DVD-ROM_QM00001", Dec 2 04:48:05 localhost agitated_jones[281645]: "lsm_data": {}, Dec 2 04:48:05 localhost agitated_jones[281645]: "lvs": [], Dec 2 04:48:05 localhost agitated_jones[281645]: "path": "/dev/sr0", Dec 2 04:48:05 localhost agitated_jones[281645]: "rejected_reasons": [ Dec 2 04:48:05 localhost agitated_jones[281645]: "Has a FileSystem", Dec 2 04:48:05 localhost agitated_jones[281645]: "Insufficient space (<5GB)" Dec 2 04:48:05 localhost agitated_jones[281645]: ], Dec 2 04:48:05 localhost agitated_jones[281645]: "sys_api": { Dec 2 04:48:05 localhost agitated_jones[281645]: "actuators": null, Dec 2 04:48:05 localhost agitated_jones[281645]: "device_nodes": "sr0", Dec 2 04:48:05 localhost agitated_jones[281645]: "human_readable_size": "482.00 KB", Dec 2 04:48:05 localhost agitated_jones[281645]: "id_bus": "ata", Dec 2 04:48:05 localhost agitated_jones[281645]: "model": "QEMU DVD-ROM", Dec 2 04:48:05 localhost agitated_jones[281645]: "nr_requests": "2", Dec 2 04:48:05 localhost agitated_jones[281645]: "partitions": {}, Dec 2 04:48:05 localhost agitated_jones[281645]: "path": "/dev/sr0", Dec 2 04:48:05 localhost agitated_jones[281645]: "removable": "1", Dec 2 04:48:05 localhost agitated_jones[281645]: "rev": "2.5+", Dec 2 04:48:05 localhost agitated_jones[281645]: "ro": "0", Dec 2 04:48:05 localhost agitated_jones[281645]: "rotational": "1", Dec 2 04:48:05 localhost agitated_jones[281645]: "sas_address": "", Dec 2 04:48:05 localhost agitated_jones[281645]: "sas_device_handle": "", Dec 2 04:48:05 localhost agitated_jones[281645]: "scheduler_mode": "mq-deadline", Dec 2 04:48:05 localhost agitated_jones[281645]: "sectors": 0, Dec 2 04:48:05 localhost 
agitated_jones[281645]: "sectorsize": "2048", Dec 2 04:48:05 localhost agitated_jones[281645]: "size": 493568.0, Dec 2 04:48:05 localhost agitated_jones[281645]: "support_discard": "0", Dec 2 04:48:05 localhost agitated_jones[281645]: "type": "disk", Dec 2 04:48:05 localhost agitated_jones[281645]: "vendor": "QEMU" Dec 2 04:48:05 localhost agitated_jones[281645]: } Dec 2 04:48:05 localhost agitated_jones[281645]: } Dec 2 04:48:05 localhost agitated_jones[281645]: ] Dec 2 04:48:05 localhost systemd[1]: libpod-e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc.scope: Deactivated successfully. Dec 2 04:48:05 localhost podman[281630]: 2025-12-02 09:48:05.428814032 +0000 UTC m=+1.040726534 container died e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_jones, version=7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:48:05 localhost systemd[1]: 
tmp-crun.NBLHZm.mount: Deactivated successfully. Dec 2 04:48:05 localhost systemd[1]: var-lib-containers-storage-overlay-bda2654cca2a3bf25e1d593d92ea0197fdeb2139d82abaea956267da0e5f201e-merged.mount: Deactivated successfully. Dec 2 04:48:05 localhost podman[283686]: 2025-12-02 09:48:05.527930196 +0000 UTC m=+0.085363335 container remove e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_jones, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , version=7, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:48:05 localhost systemd[1]: libpod-conmon-e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc.scope: Deactivated successfully. 
Dec 2 04:48:08 localhost ovn_metadata_agent[159477]: 2025-12-02 09:48:08.320 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 04:48:08 localhost ovn_metadata_agent[159477]: 2025-12-02 09:48:08.322 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 04:48:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:48:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. 
Dec 2 04:48:12 localhost podman[283719]: 2025-12-02 09:48:12.068298933 +0000 UTC m=+0.068966582 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:48:12 localhost podman[283719]: 2025-12-02 09:48:12.083119747 +0000 UTC m=+0.083787446 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:48:12 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:48:12 localhost openstack_network_exporter[241816]: ERROR 09:48:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:48:12 localhost openstack_network_exporter[241816]: ERROR 09:48:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:48:12 localhost openstack_network_exporter[241816]: ERROR 09:48:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:48:12 localhost openstack_network_exporter[241816]: ERROR 09:48:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:48:12 localhost openstack_network_exporter[241816]: Dec 2 04:48:12 localhost openstack_network_exporter[241816]: ERROR 09:48:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:48:12 localhost openstack_network_exporter[241816]: Dec 2 04:48:12 localhost podman[283720]: 2025-12-02 09:48:12.155328097 +0000 UTC m=+0.151474547 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Dec 2 04:48:12 localhost podman[283720]: 2025-12-02 09:48:12.189895615 +0000 UTC m=+0.186042075 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': 
'/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 04:48:12 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 04:48:13 localhost ovn_metadata_agent[159477]: 2025-12-02 09:48:13.326 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:48:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:48:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. 
Dec 2 04:48:14 localhost podman[283761]: 2025-12-02 09:48:14.083688977 +0000 UTC m=+0.086444157 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 04:48:14 localhost podman[283761]: 2025-12-02 09:48:14.091762864 +0000 UTC 
m=+0.094518034 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 04:48:14 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:48:14 localhost podman[283762]: 2025-12-02 09:48:14.187642118 +0000 UTC m=+0.187393486 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 04:48:14 localhost podman[283762]: 2025-12-02 09:48:14.262356195 +0000 UTC m=+0.262107623 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 04:48:14 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 
09:48:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56143 DF PROTO=TCP SPT=54658 DPT=9102 SEQ=4214977276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD553C1CE0000000001030307) Dec 2 04:48:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56144 DF PROTO=TCP SPT=54658 DPT=9102 SEQ=4214977276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD553C5E20000000001030307) Dec 2 04:48:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26090 DF PROTO=TCP SPT=36006 DPT=9102 SEQ=3478909886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD553C9220000000001030307) Dec 2 04:48:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 
MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56145 DF PROTO=TCP SPT=54658 DPT=9102 SEQ=4214977276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD553CDE20000000001030307) Dec 2 04:48:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10178 DF PROTO=TCP SPT=38574 DPT=9102 SEQ=3134322641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD553D1220000000001030307) Dec 2 04:48:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:48:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:48:23 localhost systemd[1]: tmp-crun.a7MXOw.mount: Deactivated successfully. Dec 2 04:48:23 localhost podman[283805]: 2025-12-02 09:48:23.077748563 +0000 UTC m=+0.082616780 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': 
[], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41) Dec 2 04:48:23 localhost podman[283805]: 2025-12-02 09:48:23.093891957 +0000 UTC m=+0.098760204 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, architecture=x86_64) Dec 2 04:48:23 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 04:48:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56146 DF PROTO=TCP SPT=54658 DPT=9102 SEQ=4214977276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD553DDA20000000001030307) Dec 2 04:48:23 localhost systemd[1]: tmp-crun.naHGM6.mount: Deactivated successfully. Dec 2 04:48:23 localhost podman[283804]: 2025-12-02 09:48:23.180660362 +0000 UTC m=+0.187311394 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:48:23 localhost podman[283804]: 
2025-12-02 09:48:23.186537592 +0000 UTC m=+0.193188634 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:48:23 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 04:48:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 04:48:30 localhost podman[283849]: 2025-12-02 09:48:30.064259865 +0000 UTC m=+0.070005484 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:48:30 localhost podman[283849]: 2025-12-02 09:48:30.097957846 +0000 UTC m=+0.103703455 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 04:48:30 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:48:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56147 DF PROTO=TCP SPT=54658 DPT=9102 SEQ=4214977276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD553FD220000000001030307) Dec 2 04:48:33 localhost podman[239757]: time="2025-12-02T09:48:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:48:33 localhost podman[239757]: @ - - [02/Dec/2025:09:48:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1" Dec 2 04:48:33 localhost podman[239757]: @ - - [02/Dec/2025:09:48:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17705 "" "Go-http-client/1.1" Dec 2 04:48:36 localhost nova_compute[281045]: 2025-12-02 09:48:36.961 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:36 localhost nova_compute[281045]: 2025-12-02 09:48:36.988 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:42 localhost openstack_network_exporter[241816]: ERROR 09:48:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:48:42 localhost openstack_network_exporter[241816]: ERROR 09:48:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:48:42 localhost openstack_network_exporter[241816]: ERROR 09:48:42 
appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:48:42 localhost openstack_network_exporter[241816]: ERROR 09:48:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:48:42 localhost openstack_network_exporter[241816]: Dec 2 04:48:42 localhost openstack_network_exporter[241816]: ERROR 09:48:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:48:42 localhost openstack_network_exporter[241816]: Dec 2 04:48:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:48:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:48:43 localhost systemd[1]: tmp-crun.vJOlmn.mount: Deactivated successfully. Dec 2 04:48:43 localhost podman[283868]: 2025-12-02 09:48:43.094616954 +0000 UTC m=+0.100486317 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:48:43 
localhost podman[283868]: 2025-12-02 09:48:43.102163365 +0000 UTC m=+0.108032688 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:48:43 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 04:48:43 localhost podman[283869]: 2025-12-02 09:48:43.188370933 +0000 UTC m=+0.192163762 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 04:48:43 localhost podman[283869]: 2025-12-02 09:48:43.204813696 +0000 UTC m=+0.208606495 container exec_died 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 04:48:43 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.530 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.530 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.531 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.531 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.605 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.605 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.605 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.606 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.606 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.607 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.607 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.607 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.608 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.626 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.626 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.627 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.627 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - 
- - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:48:44 localhost nova_compute[281045]: 2025-12-02 09:48:44.628 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:48:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:48:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:48:45 localhost nova_compute[281045]: 2025-12-02 09:48:45.070 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:48:45 localhost podman[283932]: 2025-12-02 09:48:45.077799887 +0000 UTC m=+0.081842656 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:48:45 localhost podman[283932]: 2025-12-02 09:48:45.11414991 +0000 UTC m=+0.118192659 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent) Dec 2 04:48:45 localhost systemd[1]: tmp-crun.4StRXj.mount: Deactivated successfully. Dec 2 04:48:45 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:48:45 localhost podman[283933]: 2025-12-02 09:48:45.137531865 +0000 UTC m=+0.137833870 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0) Dec 2 04:48:45 localhost podman[283933]: 2025-12-02 09:48:45.169772263 +0000 UTC m=+0.170074258 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 
9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS) Dec 2 04:48:45 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 04:48:45 localhost nova_compute[281045]: 2025-12-02 09:48:45.293 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:48:45 localhost nova_compute[281045]: 2025-12-02 09:48:45.296 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12462MB free_disk=41.83708190917969GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:48:45 localhost nova_compute[281045]: 2025-12-02 09:48:45.296 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:48:45 localhost nova_compute[281045]: 2025-12-02 09:48:45.296 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:48:45 localhost nova_compute[281045]: 2025-12-02 09:48:45.597 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:48:45 localhost nova_compute[281045]: 2025-12-02 09:48:45.597 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:48:45 localhost nova_compute[281045]: 2025-12-02 09:48:45.622 281049 DEBUG 
oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:48:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48169 DF PROTO=TCP SPT=36722 DPT=9102 SEQ=2892143737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55436FE0000000001030307) Dec 2 04:48:46 localhost nova_compute[281045]: 2025-12-02 09:48:46.078 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:48:46 localhost nova_compute[281045]: 2025-12-02 09:48:46.084 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:48:46 localhost nova_compute[281045]: 2025-12-02 09:48:46.108 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:48:46 localhost nova_compute[281045]: 2025-12-02 09:48:46.110 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:48:46 localhost nova_compute[281045]: 2025-12-02 09:48:46.111 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:48:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48170 DF PROTO=TCP SPT=36722 DPT=9102 SEQ=2892143737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5543B220000000001030307) Dec 2 04:48:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56148 DF PROTO=TCP SPT=54658 DPT=9102 SEQ=4214977276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5543D220000000001030307) Dec 2 04:48:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48171 DF PROTO=TCP SPT=36722 DPT=9102 SEQ=2892143737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55443230000000001030307) Dec 2 04:48:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=26091 DF PROTO=TCP SPT=36006 DPT=9102 SEQ=3478909886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55447220000000001030307) Dec 2 04:48:50 localhost sshd[283998]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:48:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48172 DF PROTO=TCP SPT=36722 DPT=9102 SEQ=2892143737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55452E20000000001030307) Dec 2 04:48:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:48:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:48:54 localhost podman[284000]: 2025-12-02 09:48:54.07654885 +0000 UTC m=+0.082573309 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:48:54 localhost podman[284000]: 2025-12-02 09:48:54.083563815 +0000 UTC m=+0.089588264 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:48:54 localhost systemd[1]: 
3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 04:48:54 localhost podman[284001]: 2025-12-02 09:48:54.13206993 +0000 UTC m=+0.132801837 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Dec 2 04:48:54 localhost podman[284001]: 2025-12-02 09:48:54.147975377 +0000 UTC m=+0.148707294 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=9.6, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 04:48:54 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 04:49:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:49:01 localhost systemd[1]: tmp-crun.XXGVGe.mount: Deactivated successfully. 
Dec 2 04:49:01 localhost podman[284042]: 2025-12-02 09:49:01.077785863 +0000 UTC m=+0.080675561 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 04:49:01 localhost podman[284042]: 2025-12-02 09:49:01.111754942 +0000 UTC m=+0.114644590 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 04:49:01 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:49:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48173 DF PROTO=TCP SPT=36722 DPT=9102 SEQ=2892143737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55473220000000001030307) Dec 2 04:49:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:49:03.159 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:49:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:49:03.159 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:49:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:49:03.159 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:49:03 localhost podman[239757]: time="2025-12-02T09:49:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:49:03 localhost podman[239757]: @ - - [02/Dec/2025:09:49:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1" Dec 2 04:49:03 localhost podman[239757]: @ - - [02/Dec/2025:09:49:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17695 "" "Go-http-client/1.1" Dec 2 04:49:12 localhost openstack_network_exporter[241816]: ERROR 09:49:12 appctl.go:131: Failed 
to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:49:12 localhost openstack_network_exporter[241816]: ERROR 09:49:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:49:12 localhost openstack_network_exporter[241816]: ERROR 09:49:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:49:12 localhost openstack_network_exporter[241816]: ERROR 09:49:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:49:12 localhost openstack_network_exporter[241816]: Dec 2 04:49:12 localhost openstack_network_exporter[241816]: ERROR 09:49:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:49:12 localhost openstack_network_exporter[241816]: Dec 2 04:49:13 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Dec 2 04:49:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:49:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. 
Dec 2 04:49:14 localhost podman[284147]: 2025-12-02 09:49:14.071882161 +0000 UTC m=+0.078835075 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:49:14 localhost podman[284147]: 2025-12-02 09:49:14.111805723 +0000 UTC m=+0.118758647 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:49:14 localhost systemd[1]: tmp-crun.v0mQUE.mount: Deactivated successfully. Dec 2 04:49:14 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:49:14 localhost podman[284148]: 2025-12-02 09:49:14.13097667 +0000 UTC m=+0.136870592 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:49:14 localhost podman[284148]: 2025-12-02 09:49:14.144910886 +0000 UTC m=+0.150804788 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125) Dec 2 04:49:14 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 04:49:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:49:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:49:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8755 DF PROTO=TCP SPT=55450 DPT=9102 SEQ=2835534972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD554AC2E0000000001030307) Dec 2 04:49:16 localhost podman[284189]: 2025-12-02 09:49:16.068070708 +0000 UTC m=+0.074214812 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 04:49:16 localhost podman[284189]: 2025-12-02 09:49:16.074895008 +0000 UTC m=+0.081039152 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 04:49:16 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:49:16 localhost podman[284190]: 2025-12-02 09:49:16.137627078 +0000 UTC m=+0.136409588 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:49:16 localhost podman[284190]: 2025-12-02 09:49:16.201965267 +0000 UTC m=+0.200747797 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 04:49:16 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:49:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8756 DF PROTO=TCP SPT=55450 DPT=9102 SEQ=2835534972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD554B0230000000001030307) Dec 2 04:49:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48174 DF PROTO=TCP SPT=36722 DPT=9102 SEQ=2892143737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD554B3230000000001030307) Dec 2 04:49:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8757 DF PROTO=TCP SPT=55450 DPT=9102 SEQ=2835534972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD554B8220000000001030307) Dec 2 04:49:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56149 DF PROTO=TCP SPT=54658 DPT=9102 SEQ=4214977276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD554BB220000000001030307) Dec 2 04:49:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8758 DF PROTO=TCP SPT=55450 DPT=9102 SEQ=2835534972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD554C7E20000000001030307) Dec 2 04:49:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:49:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 04:49:25 localhost podman[284234]: 2025-12-02 09:49:25.064219872 +0000 UTC m=+0.072264003 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:49:25 localhost podman[284235]: 2025-12-02 09:49:25.128018575 +0000 UTC m=+0.131599030 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, summary=Provides the 
latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41) Dec 2 04:49:25 localhost podman[284235]: 2025-12-02 09:49:25.144039705 +0000 UTC m=+0.147620130 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': 
'/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, distribution-scope=public, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git) Dec 2 04:49:25 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 04:49:25 localhost podman[284234]: 2025-12-02 09:49:25.202258838 +0000 UTC m=+0.210302979 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:49:25 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:49:30 localhost ovn_controller[153778]: 2025-12-02T09:49:30Z|00037|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory Dec 2 04:49:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8759 DF PROTO=TCP SPT=55450 DPT=9102 SEQ=2835534972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD554E9220000000001030307) Dec 2 04:49:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:49:32 localhost podman[284277]: 2025-12-02 09:49:32.07015107 +0000 UTC m=+0.071095227 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd) Dec 2 04:49:32 localhost podman[284277]: 2025-12-02 09:49:32.083913302 +0000 UTC m=+0.084857419 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 2 04:49:32 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 04:49:33 localhost podman[239757]: time="2025-12-02T09:49:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:49:33 localhost podman[239757]: @ - - [02/Dec/2025:09:49:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 2 04:49:33 localhost podman[239757]: @ - - [02/Dec/2025:09:49:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17708 "" "Go-http-client/1.1"
Dec 2 04:49:42 localhost openstack_network_exporter[241816]: ERROR 09:49:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:49:42 localhost openstack_network_exporter[241816]: ERROR 09:49:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:49:42 localhost openstack_network_exporter[241816]: ERROR 09:49:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:49:42 localhost openstack_network_exporter[241816]: ERROR 09:49:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:49:42 localhost openstack_network_exporter[241816]:
Dec 2 04:49:42 localhost openstack_network_exporter[241816]: ERROR 09:49:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:49:42 localhost openstack_network_exporter[241816]:
Dec 2 04:49:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:49:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:49:45 localhost systemd[1]: tmp-crun.rm4fFB.mount: Deactivated successfully.
Dec 2 04:49:45 localhost podman[284296]: 2025-12-02 09:49:45.120686458 +0000 UTC m=+0.127411942 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 2 04:49:45 localhost podman[284297]: 2025-12-02 09:49:45.17596964 +0000 UTC m=+0.139003796 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 2 04:49:45 localhost podman[284297]: 2025-12-02 09:49:45.189891036 +0000 UTC m=+0.152925202 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec 2 04:49:45 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 2 04:49:45 localhost podman[284296]: 2025-12-02 09:49:45.206872236 +0000 UTC m=+0.213597650 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 2 04:49:45 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 2 04:49:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31533 DF PROTO=TCP SPT=53390 DPT=9102 SEQ=1279323034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD555215E0000000001030307)
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.104 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.121 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.121 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.121 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.552 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.552 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.553 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.553 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.553 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.554 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.571 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.572 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.572 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.572 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 2 04:49:46 localhost nova_compute[281045]: 2025-12-02 09:49:46.573 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 04:49:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:49:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:49:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31534 DF PROTO=TCP SPT=53390 DPT=9102 SEQ=1279323034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55525620000000001030307)
Dec 2 04:49:47 localhost nova_compute[281045]: 2025-12-02 09:49:47.064 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 04:49:47 localhost systemd[1]: tmp-crun.fEMO3O.mount: Deactivated successfully.
Dec 2 04:49:47 localhost podman[284359]: 2025-12-02 09:49:47.078654705 +0000 UTC m=+0.086058926 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 2 04:49:47 localhost podman[284359]: 2025-12-02 09:49:47.109495949 +0000 UTC m=+0.116900190 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 2 04:49:47 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:49:47 localhost podman[284360]: 2025-12-02 09:49:47.115521263 +0000 UTC m=+0.120797678 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller)
Dec 2 04:49:47 localhost podman[284360]: 2025-12-02 09:49:47.195067279 +0000 UTC m=+0.200343734 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:49:47 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:49:47 localhost nova_compute[281045]: 2025-12-02 09:49:47.257 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 2 04:49:47 localhost nova_compute[281045]: 2025-12-02 09:49:47.258 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12466MB free_disk=41.8370246887207GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 2 04:49:47 localhost nova_compute[281045]: 2025-12-02 09:49:47.258 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:49:47 localhost nova_compute[281045]: 2025-12-02 09:49:47.259 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:49:47 localhost nova_compute[281045]: 2025-12-02 09:49:47.302 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 2 04:49:47 localhost nova_compute[281045]: 2025-12-02 09:49:47.302 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 2 04:49:47 localhost nova_compute[281045]: 2025-12-02 09:49:47.319 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 04:49:47 localhost nova_compute[281045]: 2025-12-02 09:49:47.781 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 04:49:47 localhost nova_compute[281045]: 2025-12-02 09:49:47.787 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 2 04:49:47 localhost nova_compute[281045]: 2025-12-02 09:49:47.817 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 2 04:49:47 localhost nova_compute[281045]: 2025-12-02 09:49:47.818 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 2 04:49:47 localhost nova_compute[281045]: 2025-12-02 09:49:47.819 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:49:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8760 DF PROTO=TCP SPT=55450 DPT=9102 SEQ=2835534972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55529220000000001030307)
Dec 2 04:49:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31535 DF PROTO=TCP SPT=53390 DPT=9102 SEQ=1279323034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5552D620000000001030307)
Dec 2 04:49:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48175 DF PROTO=TCP SPT=36722 DPT=9102 SEQ=2892143737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55531220000000001030307)
Dec 2 04:49:50 localhost sshd[284425]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:49:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31536 DF PROTO=TCP SPT=53390 DPT=9102 SEQ=1279323034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5553D220000000001030307)
Dec 2 04:49:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 04:49:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:49:56 localhost podman[284427]: 2025-12-02 09:49:56.082807613 +0000 UTC m=+0.088481819 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 04:49:56 localhost podman[284427]: 2025-12-02 09:49:56.094945194 +0000 UTC m=+0.100619500 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 2 04:49:56 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 04:49:56 localhost systemd[1]: tmp-crun.BA4kce.mount: Deactivated successfully.
Dec 2 04:49:56 localhost podman[284428]: 2025-12-02 09:49:56.184496215 +0000 UTC m=+0.187269532 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Dec 2 04:49:56 localhost podman[284428]: 2025-12-02 09:49:56.196274637 +0000 UTC m=+0.199047944 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses
microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, 
io.openshift.tags=minimal rhel9, release=1755695350) Dec 2 04:49:56 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 04:50:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31537 DF PROTO=TCP SPT=53390 DPT=9102 SEQ=1279323034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5555D220000000001030307) Dec 2 04:50:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:50:03 localhost systemd[1]: tmp-crun.UCJsNd.mount: Deactivated successfully. Dec 2 04:50:03 localhost podman[284470]: 2025-12-02 09:50:03.07274734 +0000 UTC m=+0.079722392 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 04:50:03 localhost podman[284470]: 2025-12-02 09:50:03.088240994 +0000 UTC m=+0.095215986 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 04:50:03 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:50:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:50:03.160 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:50:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:50:03.160 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:50:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:50:03.161 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:50:03 localhost podman[239757]: time="2025-12-02T09:50:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:50:03 localhost podman[239757]: @ - - [02/Dec/2025:09:50:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" 
"Go-http-client/1.1" Dec 2 04:50:03 localhost podman[239757]: @ - - [02/Dec/2025:09:50:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17702 "" "Go-http-client/1.1" Dec 2 04:50:12 localhost openstack_network_exporter[241816]: ERROR 09:50:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:50:12 localhost openstack_network_exporter[241816]: ERROR 09:50:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:50:12 localhost openstack_network_exporter[241816]: ERROR 09:50:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:50:12 localhost openstack_network_exporter[241816]: ERROR 09:50:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:50:12 localhost openstack_network_exporter[241816]: Dec 2 04:50:12 localhost openstack_network_exporter[241816]: ERROR 09:50:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:50:12 localhost openstack_network_exporter[241816]: Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 
04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 
DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:50:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:50:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17469 DF PROTO=TCP SPT=44588 DPT=9102 SEQ=2810891178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD555968E0000000001030307) Dec 2 04:50:16 localhost podman[284578]: 2025-12-02 09:50:16.090598215 +0000 UTC m=+0.090010287 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:50:16 localhost podman[284577]: 2025-12-02 09:50:16.065506747 +0000 UTC m=+0.068905140 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 04:50:16 localhost podman[284578]: 2025-12-02 09:50:16.128912048 +0000 UTC m=+0.128324090 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b 
(image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 04:50:16 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 04:50:16 localhost podman[284577]: 2025-12-02 09:50:16.153541102 +0000 UTC m=+0.156939545 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:50:16 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:50:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17470 DF PROTO=TCP SPT=44588 DPT=9102 SEQ=2810891178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5559AA20000000001030307) Dec 2 04:50:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31538 DF PROTO=TCP SPT=53390 DPT=9102 SEQ=1279323034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5559D230000000001030307) Dec 2 04:50:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. 
Dec 2 04:50:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:50:18 localhost systemd[1]: tmp-crun.X742SO.mount: Deactivated successfully. Dec 2 04:50:18 localhost podman[284620]: 2025-12-02 09:50:18.084645408 +0000 UTC m=+0.086601552 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:50:18 localhost podman[284620]: 2025-12-02 09:50:18.096986505 +0000 UTC m=+0.098942619 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:50:18 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:50:18 localhost podman[284621]: 2025-12-02 09:50:18.18727266 +0000 UTC m=+0.184031315 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller) Dec 2 04:50:18 localhost podman[284621]: 2025-12-02 09:50:18.250096953 +0000 UTC m=+0.246855658 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, 
name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 04:50:18 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:50:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17471 DF PROTO=TCP SPT=44588 DPT=9102 SEQ=2810891178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD555A2A20000000001030307) Dec 2 04:50:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8761 DF PROTO=TCP SPT=55450 DPT=9102 SEQ=2835534972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD555A7220000000001030307) Dec 2 04:50:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17472 DF PROTO=TCP SPT=44588 DPT=9102 SEQ=2810891178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD555B2620000000001030307) Dec 2 04:50:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:50:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 04:50:27 localhost podman[284666]: 2025-12-02 09:50:27.078201701 +0000 UTC m=+0.081288779 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:50:27 localhost podman[284666]: 2025-12-02 09:50:27.083349879 +0000 UTC m=+0.086436967 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:50:27 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:50:27 localhost podman[284667]: 2025-12-02 09:50:27.133488244 +0000 UTC m=+0.135554720 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, distribution-scope=public, config_id=edpm, vcs-type=git, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9) Dec 2 04:50:27 localhost podman[284667]: 2025-12-02 09:50:27.143947794 +0000 UTC m=+0.146014290 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.expose-services=, release=1755695350, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:50:27 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 04:50:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17473 DF PROTO=TCP SPT=44588 DPT=9102 SEQ=2810891178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD555D3220000000001030307) Dec 2 04:50:33 localhost podman[239757]: time="2025-12-02T09:50:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:50:33 localhost podman[239757]: @ - - [02/Dec/2025:09:50:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1" Dec 2 04:50:33 localhost podman[239757]: @ - - [02/Dec/2025:09:50:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17712 "" "Go-http-client/1.1" Dec 2 04:50:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 04:50:34 localhost podman[284711]: 2025-12-02 09:50:34.078995993 +0000 UTC m=+0.077052970 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 04:50:34 localhost podman[284711]: 2025-12-02 09:50:34.113890582 +0000 UTC m=+0.111947649 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd) Dec 2 04:50:34 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:50:34 localhost sshd[284730]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:50:34 localhost systemd-logind[760]: New session 61 of user zuul. 
Dec 2 04:50:34 localhost systemd[1]: Started Session 61 of User zuul. Dec 2 04:50:35 localhost python3[284752]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:50:35 localhost subscription-manager[284753]: Unregistered machine with identity: 5e8bb4be-b98c-46c0-ac7b-5189dfb48508 Dec 2 04:50:42 localhost openstack_network_exporter[241816]: ERROR 09:50:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:50:42 localhost openstack_network_exporter[241816]: ERROR 09:50:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:50:42 localhost openstack_network_exporter[241816]: ERROR 09:50:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:50:42 localhost openstack_network_exporter[241816]: ERROR 09:50:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:50:42 localhost openstack_network_exporter[241816]: Dec 2 04:50:42 localhost openstack_network_exporter[241816]: ERROR 09:50:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:50:42 localhost openstack_network_exporter[241816]: Dec 2 04:50:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36388 DF PROTO=TCP SPT=56236 DPT=9102 SEQ=2067433686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5560BBE0000000001030307) Dec 2 04:50:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. 
Dec 2 04:50:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:50:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36389 DF PROTO=TCP SPT=56236 DPT=9102 SEQ=2067433686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5560FE20000000001030307) Dec 2 04:50:47 localhost podman[284756]: 2025-12-02 09:50:47.083740319 +0000 UTC m=+0.085534550 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 04:50:47 localhost podman[284756]: 2025-12-02 09:50:47.099107429 +0000 UTC m=+0.100901640 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 04:50:47 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 04:50:47 localhost podman[284755]: 2025-12-02 09:50:47.181935315 +0000 UTC m=+0.185594924 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 04:50:47 localhost podman[284755]: 2025-12-02 09:50:47.21902921 +0000 UTC m=+0.222688789 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 
'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:50:47 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.793 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.793 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.794 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.794 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.811 281049 DEBUG nova.compute.manager [None 
req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.811 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.811 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.812 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.812 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.813 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.813 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] 
CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.813 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.828 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.828 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.829 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:50:47 localhost nova_compute[281045]: 2025-12-02 09:50:47.829 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:50:47 
localhost nova_compute[281045]: 2025-12-02 09:50:47.829 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:50:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17474 DF PROTO=TCP SPT=44588 DPT=9102 SEQ=2810891178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55613220000000001030307) Dec 2 04:50:48 localhost nova_compute[281045]: 2025-12-02 09:50:48.284 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:50:48 localhost nova_compute[281045]: 2025-12-02 09:50:48.544 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:50:48 localhost nova_compute[281045]: 2025-12-02 09:50:48.547 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12504MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:50:48 localhost nova_compute[281045]: 2025-12-02 09:50:48.547 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:50:48 localhost nova_compute[281045]: 2025-12-02 09:50:48.548 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:50:48 localhost nova_compute[281045]: 2025-12-02 09:50:48.630 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:50:48 localhost nova_compute[281045]: 2025-12-02 09:50:48.631 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:50:48 localhost nova_compute[281045]: 2025-12-02 09:50:48.648 281049 DEBUG 
oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:50:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:50:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:50:49 localhost systemd[1]: tmp-crun.ttJMoM.mount: Deactivated successfully. Dec 2 04:50:49 localhost podman[284840]: 2025-12-02 09:50:49.073392537 +0000 UTC m=+0.076583465 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS 
Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:50:49 localhost podman[284839]: 2025-12-02 09:50:49.091612064 +0000 UTC m=+0.093399440 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251125) Dec 2 04:50:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36390 DF PROTO=TCP SPT=56236 DPT=9102 SEQ=2067433686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55617E20000000001030307) Dec 2 04:50:49 localhost podman[284839]: 2025-12-02 09:50:49.119993954 +0000 UTC m=+0.121781310 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 04:50:49 localhost nova_compute[281045]: 2025-12-02 09:50:49.124 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:50:49 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:50:49 localhost nova_compute[281045]: 2025-12-02 09:50:49.133 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:50:49 localhost podman[284840]: 2025-12-02 09:50:49.141344766 +0000 UTC m=+0.144535734 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:50:49 localhost nova_compute[281045]: 2025-12-02 09:50:49.153 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:50:49 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:50:49 localhost nova_compute[281045]: 2025-12-02 09:50:49.155 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:50:49 localhost nova_compute[281045]: 2025-12-02 09:50:49.156 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:50:49 localhost nova_compute[281045]: 2025-12-02 09:50:49.871 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31539 DF PROTO=TCP SPT=53390 DPT=9102 SEQ=1279323034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5561B220000000001030307) Dec 2 04:50:50 localhost sshd[284884]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:50:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36391 DF PROTO=TCP SPT=56236 DPT=9102 SEQ=2067433686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55627A20000000001030307) Dec 2 04:50:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. 
Dec 2 04:50:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:50:58 localhost systemd[1]: tmp-crun.ehOcL5.mount: Deactivated successfully. Dec 2 04:50:58 localhost podman[284886]: 2025-12-02 09:50:58.067189802 +0000 UTC m=+0.073865447 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:50:58 localhost podman[284886]: 2025-12-02 09:50:58.075231188 +0000 UTC m=+0.081906843 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:50:58 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:50:58 localhost podman[284887]: 2025-12-02 09:50:58.124800452 +0000 UTC m=+0.125435963 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=edpm, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible) Dec 2 04:50:58 localhost podman[284887]: 2025-12-02 09:50:58.135586322 +0000 UTC m=+0.136221873 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public) Dec 2 04:50:58 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 04:51:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36392 DF PROTO=TCP SPT=56236 DPT=9102 SEQ=2067433686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55647220000000001030307) Dec 2 04:51:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:51:03.161 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:51:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:51:03.162 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:51:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:51:03.162 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:51:03 localhost podman[239757]: time="2025-12-02T09:51:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:51:03 localhost podman[239757]: @ - - 
[02/Dec/2025:09:51:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1" Dec 2 04:51:03 localhost podman[239757]: @ - - [02/Dec/2025:09:51:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17705 "" "Go-http-client/1.1" Dec 2 04:51:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:51:04 localhost podman[285002]: 2025-12-02 09:51:04.722910237 +0000 UTC m=+0.078695665 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:51:04 localhost podman[285002]: 2025-12-02 09:51:04.736884664 +0000 UTC m=+0.092670142 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 04:51:04 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:51:12 localhost openstack_network_exporter[241816]: ERROR 09:51:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:51:12 localhost openstack_network_exporter[241816]: ERROR 09:51:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:51:12 localhost openstack_network_exporter[241816]: ERROR 09:51:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:51:12 localhost openstack_network_exporter[241816]: ERROR 09:51:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:51:12 localhost openstack_network_exporter[241816]: Dec 2 04:51:12 localhost openstack_network_exporter[241816]: ERROR 09:51:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:51:12 localhost openstack_network_exporter[241816]: Dec 2 04:51:12 localhost sshd[285055]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:51:12 localhost systemd[1]: Created slice User Slice of UID 1003. Dec 2 04:51:12 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Dec 2 04:51:12 localhost systemd-logind[760]: New session 62 of user tripleo-admin. Dec 2 04:51:12 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Dec 2 04:51:12 localhost systemd[1]: Starting User Manager for UID 1003... Dec 2 04:51:12 localhost systemd[285059]: Queued start job for default target Main User Target. Dec 2 04:51:12 localhost systemd[285059]: Created slice User Application Slice. 
Dec 2 04:51:12 localhost systemd[285059]: Started Mark boot as successful after the user session has run 2 minutes. Dec 2 04:51:12 localhost systemd[285059]: Started Daily Cleanup of User's Temporary Directories. Dec 2 04:51:12 localhost systemd[285059]: Reached target Paths. Dec 2 04:51:12 localhost systemd[285059]: Reached target Timers. Dec 2 04:51:12 localhost systemd[285059]: Starting D-Bus User Message Bus Socket... Dec 2 04:51:12 localhost systemd[285059]: Starting Create User's Volatile Files and Directories... Dec 2 04:51:12 localhost systemd[285059]: Finished Create User's Volatile Files and Directories. Dec 2 04:51:12 localhost systemd[285059]: Listening on D-Bus User Message Bus Socket. Dec 2 04:51:12 localhost systemd[285059]: Reached target Sockets. Dec 2 04:51:12 localhost systemd[285059]: Reached target Basic System. Dec 2 04:51:12 localhost systemd[285059]: Reached target Main User Target. Dec 2 04:51:12 localhost systemd[285059]: Startup finished in 155ms. Dec 2 04:51:12 localhost systemd[1]: Started User Manager for UID 1003. Dec 2 04:51:12 localhost systemd[1]: Started Session 62 of User tripleo-admin. 
Dec 2 04:51:13 localhost python3[285201]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/edpm-rules.nft mode=0644 block=# 100 ceph_alertmanager (9093)#012add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"#012# 100 ceph_dashboard (8443)#012add rule inet filter EDPM_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"#012# 100 ceph_grafana (3100)#012add rule inet filter EDPM_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"#012# 100 ceph_prometheus (9092)#012add rule inet filter EDPM_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"#012# 100 ceph_rgw (8080)#012add rule inet filter EDPM_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"#012# 110 ceph_mon (6789, 3300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"#012# 112 ceph_mds (6800-7300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"#012# 113 ceph_mgr (6800-7300, 8444)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"#012# 120 ceph_nfs (2049, 12049)#012add rule inet filter EDPM_INPUT tcp dport { 2049,12049 } ct state new counter accept comment "120 ceph_nfs"#012# 123 ceph_dashboard (9090, 9094, 9283)#012add rule inet filter EDPM_INPUT tcp dport { 9090,9094,9283 } ct state new counter accept comment "123 ceph_dashboard"#012 insertbefore=^# Lock down INPUT chains state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:51:13 localhost systemd-journald[47679]: Field hash 
table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 80.5 (268 of 333 items), suggesting rotation. Dec 2 04:51:13 localhost systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 04:51:13 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:51:13 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:51:14 localhost python3[285346]: ansible-ansible.builtin.systemd Invoked with name=nftables state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:51:14 localhost systemd[1]: Stopping Netfilter Tables... Dec 2 04:51:14 localhost systemd[1]: nftables.service: Deactivated successfully. Dec 2 04:51:14 localhost systemd[1]: Stopped Netfilter Tables. Dec 2 04:51:14 localhost systemd[1]: Starting Netfilter Tables... Dec 2 04:51:14 localhost systemd[1]: Finished Netfilter Tables. Dec 2 04:51:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:51:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:51:18 localhost systemd[1]: tmp-crun.9d9aCk.mount: Deactivated successfully. 
Dec 2 04:51:18 localhost podman[285439]: 2025-12-02 09:51:18.090045515 +0000 UTC m=+0.089890256 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:51:18 localhost podman[285439]: 2025-12-02 09:51:18.124965212 +0000 UTC m=+0.124809993 container exec_died 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 04:51:18 localhost podman[285438]: 2025-12-02 09:51:18.13503789 +0000 UTC m=+0.140947266 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:51:18 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 04:51:18 localhost podman[285438]: 2025-12-02 09:51:18.141423605 +0000 UTC m=+0.147332961 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:51:18 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:51:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:51:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:51:20 localhost systemd[1]: tmp-crun.F8cAj2.mount: Deactivated successfully. 
Dec 2 04:51:20 localhost podman[285518]: 2025-12-02 09:51:20.086373497 +0000 UTC m=+0.081535312 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 04:51:20 localhost systemd[1]: tmp-crun.8XRguH.mount: Deactivated successfully. 
Dec 2 04:51:20 localhost podman[285517]: 2025-12-02 09:51:20.148746553 +0000 UTC m=+0.146597650 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent) Dec 2 04:51:20 localhost podman[285517]: 2025-12-02 09:51:20.15849512 +0000 UTC 
m=+0.156346217 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent) Dec 2 04:51:20 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:51:20 localhost podman[285518]: 2025-12-02 09:51:20.201329059 +0000 UTC m=+0.196490864 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 04:51:20 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:51:25 localhost podman[285710]: Dec 2 04:51:25 localhost podman[285710]: 2025-12-02 09:51:25.713068917 +0000 UTC m=+0.067758480 container create 677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_jemison, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, version=7, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, release=1763362218, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True) Dec 2 04:51:25 localhost systemd[1]: Started libpod-conmon-677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e.scope. Dec 2 04:51:25 localhost systemd[1]: Started libcrun container. 
Dec 2 04:51:25 localhost podman[285710]: 2025-12-02 09:51:25.686358821 +0000 UTC m=+0.041048374 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:51:25 localhost podman[285710]: 2025-12-02 09:51:25.800803227 +0000 UTC m=+0.155492800 container init 677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_jemison, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vcs-type=git, ceph=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z) Dec 2 04:51:25 localhost podman[285710]: 2025-12-02 09:51:25.813156125 +0000 UTC m=+0.167845718 container start 677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_jemison, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, distribution-scope=public, name=rhceph, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1763362218, architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, version=7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container) Dec 2 04:51:25 localhost podman[285710]: 2025-12-02 09:51:25.813784384 +0000 UTC m=+0.168474007 container attach 677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_jemison, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1763362218, GIT_CLEAN=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z) Dec 2 04:51:25 localhost vigorous_jemison[285725]: 167 167 Dec 2 04:51:25 localhost systemd[1]: libpod-677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e.scope: Deactivated successfully. Dec 2 04:51:25 localhost podman[285710]: 2025-12-02 09:51:25.817517678 +0000 UTC m=+0.172207301 container died 677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_jemison, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux , name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, ceph=True, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, version=7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:51:25 localhost podman[285730]: 2025-12-02 09:51:25.904205006 +0000 UTC m=+0.073994731 container remove 677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=vigorous_jemison, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, architecture=x86_64, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, version=7, description=Red Hat Ceph Storage 7) Dec 2 04:51:25 localhost systemd[1]: libpod-conmon-677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e.scope: Deactivated successfully. Dec 2 04:51:25 localhost systemd[1]: Reloading. Dec 2 04:51:26 localhost systemd-sysv-generator[285776]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:51:26 localhost systemd-rc-local-generator[285772]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: var-lib-containers-storage-overlay-f3ba8f606b49fa1e33413a82c09c7cdfb7eb5c5934a67c8545afa5ff820a4849-merged.mount: Deactivated successfully. Dec 2 04:51:26 localhost systemd[1]: Reloading. Dec 2 04:51:26 localhost systemd-sysv-generator[285817]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:51:26 localhost systemd-rc-local-generator[285814]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:26 localhost systemd[1]: Starting Ceph mds.mds.np0005541914.sqgqkj for c7c8e171-a193-56fb-95fa-8879fcfa7074... 
Dec 2 04:51:27 localhost podman[285877]: Dec 2 04:51:27 localhost podman[285877]: 2025-12-02 09:51:27.104877433 +0000 UTC m=+0.086713320 container create 3b458e17899960af4c5df398e5b51007fb1c60e12a37386a9aca0dd6b8b14fda (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mds-mds-np0005541914-sqgqkj, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, release=1763362218, version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:51:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33854e09090b80cf8d8ff2a3795a72e18c22b896efd7f82c7b880820de8fe54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 04:51:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33854e09090b80cf8d8ff2a3795a72e18c22b896efd7f82c7b880820de8fe54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 04:51:27 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/d33854e09090b80cf8d8ff2a3795a72e18c22b896efd7f82c7b880820de8fe54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 04:51:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33854e09090b80cf8d8ff2a3795a72e18c22b896efd7f82c7b880820de8fe54/merged/var/lib/ceph/mds/ceph-mds.np0005541914.sqgqkj supports timestamps until 2038 (0x7fffffff) Dec 2 04:51:27 localhost podman[285877]: 2025-12-02 09:51:27.068353498 +0000 UTC m=+0.050189445 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:51:27 localhost podman[285877]: 2025-12-02 09:51:27.185997011 +0000 UTC m=+0.167832878 container init 3b458e17899960af4c5df398e5b51007fb1c60e12a37386a9aca0dd6b8b14fda (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mds-mds-np0005541914-sqgqkj, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.openshift.tags=rhceph ceph, ceph=True, GIT_CLEAN=True, RELEASE=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:51:27 
localhost podman[285877]: 2025-12-02 09:51:27.237812594 +0000 UTC m=+0.219648461 container start 3b458e17899960af4c5df398e5b51007fb1c60e12a37386a9aca0dd6b8b14fda (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mds-mds-np0005541914-sqgqkj, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, distribution-scope=public, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=) Dec 2 04:51:27 localhost bash[285877]: 3b458e17899960af4c5df398e5b51007fb1c60e12a37386a9aca0dd6b8b14fda Dec 2 04:51:27 localhost systemd[1]: Started Ceph mds.mds.np0005541914.sqgqkj for c7c8e171-a193-56fb-95fa-8879fcfa7074. 
Dec 2 04:51:27 localhost ceph-mds[285895]: set uid:gid to 167:167 (ceph:ceph)
Dec 2 04:51:27 localhost ceph-mds[285895]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mds, pid 2
Dec 2 04:51:27 localhost ceph-mds[285895]: main not setting numa affinity
Dec 2 04:51:27 localhost ceph-mds[285895]: pidfile_write: ignore empty --pid-file
Dec 2 04:51:27 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mds-mds-np0005541914-sqgqkj[285891]: starting mds.mds.np0005541914.sqgqkj at
Dec 2 04:51:27 localhost ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Updating MDS map to version 6 from mon.0
Dec 2 04:51:27 localhost ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Updating MDS map to version 7 from mon.0
Dec 2 04:51:27 localhost ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Monitors have assigned me to become a standby.
Dec 2 04:51:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 04:51:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:51:29 localhost podman[285915]: 2025-12-02 09:51:29.072042884 +0000 UTC m=+0.073188896 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:51:29 localhost podman[285916]: 2025-12-02 09:51:29.131056407 +0000 UTC m=+0.129011242 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses 
microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 04:51:29 localhost podman[285915]: 2025-12-02 09:51:29.161511887 +0000 UTC m=+0.162657859 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:51:29 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:51:29 localhost podman[285916]: 2025-12-02 09:51:29.173336138 +0000 UTC m=+0.171290923 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 04:51:29 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 04:51:33 localhost systemd[1]: tmp-crun.JS3a6W.mount: Deactivated successfully. 
Dec 2 04:51:33 localhost podman[286084]: 2025-12-02 09:51:33.565682891 +0000 UTC m=+0.091147865 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_BRANCH=main, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux , RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, ceph=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4) Dec 2 04:51:33 localhost podman[239757]: time="2025-12-02T09:51:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:51:33 localhost podman[239757]: @ - - [02/Dec/2025:09:51:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152345 "" "Go-http-client/1.1" Dec 2 04:51:33 localhost podman[239757]: @ - - [02/Dec/2025:09:51:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18190 "" "Go-http-client/1.1" Dec 2 04:51:33 localhost podman[286084]: 
2025-12-02 09:51:33.731770894 +0000 UTC m=+0.257235888 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, CEPH_POINT_RELEASE=, name=rhceph, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4) Dec 2 04:51:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:51:34 localhost systemd[1]: tmp-crun.Y1EGnD.mount: Deactivated successfully. 
Dec 2 04:51:34 localhost podman[286183]: 2025-12-02 09:51:34.896990118 +0000 UTC m=+0.097707625 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, container_name=multipathd) Dec 2 04:51:34 localhost podman[286183]: 2025-12-02 09:51:34.914214724 +0000 UTC m=+0.114932221 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 2 04:51:34 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:51:35 localhost systemd[1]: session-61.scope: Deactivated successfully. Dec 2 04:51:35 localhost systemd-logind[760]: Session 61 logged out. 
Waiting for processes to exit.
Dec 2 04:51:35 localhost systemd-logind[760]: Removed session 61.
Dec 2 04:51:42 localhost openstack_network_exporter[241816]: ERROR 09:51:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:51:42 localhost openstack_network_exporter[241816]: ERROR 09:51:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:51:42 localhost openstack_network_exporter[241816]: ERROR 09:51:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:51:42 localhost openstack_network_exporter[241816]: ERROR 09:51:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:51:42 localhost openstack_network_exporter[241816]:
Dec 2 04:51:42 localhost openstack_network_exporter[241816]: ERROR 09:51:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:51:42 localhost openstack_network_exporter[241816]:
Dec 2 04:51:46 localhost nova_compute[281045]: 2025-12-02 09:51:46.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:51:46 localhost nova_compute[281045]: 2025-12-02 09:51:46.552 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:51:46 localhost nova_compute[281045]: 2025-12-02 09:51:46.552 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:51:46 localhost nova_compute[281045]: 2025-12-02 09:51:46.552 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:51:46 localhost nova_compute[281045]: 2025-12-02 09:51:46.553 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 2 04:51:46 localhost nova_compute[281045]: 2025-12-02 09:51:46.553 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 04:51:46 localhost nova_compute[281045]: 2025-12-02 09:51:46.998 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 04:51:47 localhost nova_compute[281045]: 2025-12-02 09:51:47.211 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node.
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:51:47 localhost nova_compute[281045]: 2025-12-02 09:51:47.212 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12486MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:51:47 localhost nova_compute[281045]: 2025-12-02 09:51:47.212 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:51:47 localhost nova_compute[281045]: 2025-12-02 09:51:47.213 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:51:47 localhost nova_compute[281045]: 2025-12-02 09:51:47.265 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:51:47 localhost nova_compute[281045]: 2025-12-02 09:51:47.266 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:51:47 localhost nova_compute[281045]: 2025-12-02 09:51:47.279 281049 DEBUG 
oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:51:47 localhost nova_compute[281045]: 2025-12-02 09:51:47.728 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:51:47 localhost nova_compute[281045]: 2025-12-02 09:51:47.734 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:51:47 localhost nova_compute[281045]: 2025-12-02 09:51:47.765 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:51:47 localhost nova_compute[281045]: 2025-12-02 09:51:47.768 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 2 04:51:47 localhost nova_compute[281045]: 2025-12-02 09:51:47.768 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:51:48 localhost nova_compute[281045]: 2025-12-02 09:51:48.769 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:51:48 localhost nova_compute[281045]: 2025-12-02 09:51:48.805 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:51:48 localhost nova_compute[281045]: 2025-12-02 09:51:48.805 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:51:48 localhost nova_compute[281045]: 2025-12-02 09:51:48.806 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:51:48 localhost nova_compute[281045]: 2025-12-02 09:51:48.807 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task
ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:51:48 localhost nova_compute[281045]: 2025-12-02 09:51:48.807 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:51:48 localhost nova_compute[281045]: 2025-12-02 09:51:48.808 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:51:48 localhost nova_compute[281045]: 2025-12-02 09:51:48.808 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 2 04:51:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:51:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:51:49 localhost systemd[1]: tmp-crun.NLdBox.mount: Deactivated successfully.
Dec 2 04:51:49 localhost podman[286266]: 2025-12-02 09:51:49.106241751 +0000 UTC m=+0.104843695 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 04:51:49 localhost podman[286266]: 2025-12-02 09:51:49.140853288 +0000 UTC m=+0.139455182 container exec_died 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 04:51:49 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 04:51:49 localhost podman[286265]: 2025-12-02 09:51:49.191244067 +0000 UTC m=+0.191813761 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:51:49 localhost podman[286265]: 2025-12-02 09:51:49.204918835 +0000 UTC m=+0.205488559 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:51:49 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:51:49 localhost nova_compute[281045]: 2025-12-02 09:51:49.531 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:51:49 localhost nova_compute[281045]: 2025-12-02 09:51:49.531 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:51:49 localhost nova_compute[281045]: 2025-12-02 09:51:49.532 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:51:49 localhost nova_compute[281045]: 2025-12-02 09:51:49.532 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:51:49 localhost nova_compute[281045]: 2025-12-02 09:51:49.606 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:51:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:51:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:51:51 localhost podman[286308]: 2025-12-02 09:51:51.079519318 +0000 UTC m=+0.082680707 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 04:51:51 localhost podman[286307]: 2025-12-02 09:51:51.125623136 +0000 UTC m=+0.132596201 container health_status 
225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:51:51 localhost podman[286307]: 2025-12-02 09:51:51.133791795 +0000 UTC m=+0.140764850 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:51:51 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:51:51 localhost podman[286308]: 2025-12-02 09:51:51.192343374 +0000 UTC m=+0.195504793 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:51:51 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 04:51:52 localhost sshd[286348]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:51:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:51:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 04:52:00 localhost systemd[1]: tmp-crun.TUJEyQ.mount: Deactivated successfully. Dec 2 04:52:00 localhost podman[286350]: 2025-12-02 09:52:00.086564447 +0000 UTC m=+0.083719239 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:52:00 localhost podman[286350]: 2025-12-02 09:52:00.094354015 +0000 UTC m=+0.091508767 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:52:00 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 04:52:00 localhost systemd[1]: tmp-crun.MjrBwk.mount: Deactivated successfully. 
Dec 2 04:52:00 localhost podman[286351]: 2025-12-02 09:52:00.136259055 +0000 UTC m=+0.131074275 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, distribution-scope=public, version=9.6, release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7) Dec 2 04:52:00 localhost podman[286351]: 2025-12-02 09:52:00.151690416 +0000 UTC m=+0.146505616 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.) Dec 2 04:52:00 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 04:52:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:52:03.162 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:52:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:52:03.163 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:52:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:52:03.163 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:52:03 localhost podman[239757]: time="2025-12-02T09:52:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:52:03 localhost podman[239757]: @ - - [02/Dec/2025:09:52:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152345 "" "Go-http-client/1.1" Dec 2 04:52:03 localhost podman[239757]: @ - - [02/Dec/2025:09:52:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18194 "" "Go-http-client/1.1" Dec 2 04:52:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 04:52:05 localhost podman[286393]: 2025-12-02 09:52:05.088218293 +0000 UTC m=+0.091734373 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 04:52:05 localhost podman[286393]: 2025-12-02 09:52:05.10288943 +0000 UTC m=+0.106405560 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:52:05 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:52:12 localhost openstack_network_exporter[241816]: ERROR 09:52:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:52:12 localhost openstack_network_exporter[241816]: ERROR 09:52:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:52:12 localhost openstack_network_exporter[241816]: ERROR 09:52:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:52:12 localhost openstack_network_exporter[241816]: ERROR 09:52:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:52:12 localhost openstack_network_exporter[241816]: Dec 2 04:52:12 localhost openstack_network_exporter[241816]: ERROR 09:52:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:52:12 localhost openstack_network_exporter[241816]: Dec 2 04:52:13 localhost systemd[1]: session-62.scope: Deactivated successfully. Dec 2 04:52:13 localhost systemd[1]: session-62.scope: Consumed 1.250s CPU time. Dec 2 04:52:13 localhost systemd-logind[760]: Session 62 logged out. Waiting for processes to exit. Dec 2 04:52:13 localhost systemd-logind[760]: Removed session 62. 
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:52:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:52:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:52:20 localhost podman[286432]: 2025-12-02 09:52:20.076925426 +0000 UTC m=+0.076923930 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:52:20 localhost podman[286432]: 2025-12-02 09:52:20.086911232 +0000 UTC m=+0.086909746 container exec_died 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:52:20 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 04:52:20 localhost podman[286431]: 2025-12-02 09:52:20.139295681 +0000 UTC m=+0.143075761 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 04:52:20 localhost podman[286431]: 2025-12-02 09:52:20.176064494 +0000 UTC m=+0.179844634 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 04:52:20 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:52:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:52:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:52:22 localhost systemd[1]: tmp-crun.BIGbSC.mount: Deactivated successfully. Dec 2 04:52:22 localhost podman[286471]: 2025-12-02 09:52:22.083767189 +0000 UTC m=+0.085776310 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:52:22 localhost systemd[1]: tmp-crun.b1DZtH.mount: Deactivated successfully. Dec 2 04:52:22 localhost podman[286472]: 2025-12-02 09:52:22.119055797 +0000 UTC m=+0.116524710 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller) Dec 2 04:52:22 localhost podman[286472]: 2025-12-02 09:52:22.148238629 +0000 UTC m=+0.145707542 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 04:52:22 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:52:22 localhost podman[286471]: 2025-12-02 09:52:22.171291013 +0000 UTC m=+0.173300144 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 2 04:52:22 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: 
Deactivated successfully.
Dec 2 04:52:23 localhost systemd[1]: Stopping User Manager for UID 1003...
Dec 2 04:52:23 localhost systemd[285059]: Activating special unit Exit the Session...
Dec 2 04:52:23 localhost systemd[285059]: Stopped target Main User Target.
Dec 2 04:52:23 localhost systemd[285059]: Stopped target Basic System.
Dec 2 04:52:23 localhost systemd[285059]: Stopped target Paths.
Dec 2 04:52:23 localhost systemd[285059]: Stopped target Sockets.
Dec 2 04:52:23 localhost systemd[285059]: Stopped target Timers.
Dec 2 04:52:23 localhost systemd[285059]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 2 04:52:23 localhost systemd[285059]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 2 04:52:23 localhost systemd[285059]: Closed D-Bus User Message Bus Socket.
Dec 2 04:52:23 localhost systemd[285059]: Stopped Create User's Volatile Files and Directories.
Dec 2 04:52:23 localhost systemd[285059]: Removed slice User Application Slice.
Dec 2 04:52:23 localhost systemd[285059]: Reached target Shutdown.
Dec 2 04:52:23 localhost systemd[285059]: Finished Exit the Session.
Dec 2 04:52:23 localhost systemd[285059]: Reached target Exit the Session.
Dec 2 04:52:23 localhost systemd[1]: user@1003.service: Deactivated successfully.
Dec 2 04:52:23 localhost systemd[1]: Stopped User Manager for UID 1003.
Dec 2 04:52:23 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003...
Dec 2 04:52:24 localhost systemd[1]: run-user-1003.mount: Deactivated successfully.
Dec 2 04:52:24 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Dec 2 04:52:24 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003.
Dec 2 04:52:24 localhost systemd[1]: Removed slice User Slice of UID 1003.
Dec 2 04:52:24 localhost systemd[1]: user-1003.slice: Consumed 1.681s CPU time.
Dec 2 04:52:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:52:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:52:31 localhost systemd[1]: tmp-crun.56zb2Z.mount: Deactivated successfully. Dec 2 04:52:31 localhost podman[286639]: 2025-12-02 09:52:31.08686126 +0000 UTC m=+0.085530904 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 2 04:52:31 localhost podman[286638]: 2025-12-02 09:52:31.124956184 +0000 UTC m=+0.125037741 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:52:31 localhost podman[286638]: 2025-12-02 09:52:31.137986191 +0000 UTC m=+0.138067778 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', 
'--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:52:31 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 04:52:31 localhost podman[286639]: 2025-12-02 09:52:31.151589247 +0000 UTC m=+0.150258911 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck 
openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, vcs-type=git, name=ubi9-minimal, config_id=edpm, vendor=Red Hat, Inc.) Dec 2 04:52:31 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 04:52:33 localhost podman[239757]: time="2025-12-02T09:52:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:52:33 localhost podman[239757]: @ - - [02/Dec/2025:09:52:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152345 "" "Go-http-client/1.1" Dec 2 04:52:33 localhost podman[239757]: @ - - [02/Dec/2025:09:52:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18198 "" "Go-http-client/1.1" Dec 2 04:52:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:52:36 localhost systemd[1]: tmp-crun.nYXnl6.mount: Deactivated successfully. Dec 2 04:52:36 localhost podman[286680]: 2025-12-02 09:52:36.080362077 +0000 UTC m=+0.083581814 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 04:52:36 localhost podman[286680]: 2025-12-02 09:52:36.095081007 +0000 UTC m=+0.098300724 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 2 04:52:36 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 04:52:42 localhost openstack_network_exporter[241816]: ERROR 09:52:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:52:42 localhost openstack_network_exporter[241816]: ERROR 09:52:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:52:42 localhost openstack_network_exporter[241816]: ERROR 09:52:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:52:42 localhost openstack_network_exporter[241816]: ERROR 09:52:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:52:42 localhost openstack_network_exporter[241816]:
Dec 2 04:52:42 localhost openstack_network_exporter[241816]: ERROR 09:52:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:52:42 localhost openstack_network_exporter[241816]:
Dec 2 04:52:44 localhost nova_compute[281045]: 2025-12-02 09:52:44.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:52:44 localhost nova_compute[281045]: 2025-12-02 09:52:44.529 281049 DEBUG
nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 2 04:52:44 localhost nova_compute[281045]: 2025-12-02 09:52:44.549 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 2 04:52:44 localhost nova_compute[281045]: 2025-12-02 09:52:44.550 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:44 localhost nova_compute[281045]: 2025-12-02 09:52:44.551 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 2 04:52:44 localhost nova_compute[281045]: 2025-12-02 09:52:44.562 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:47 localhost nova_compute[281045]: 2025-12-02 09:52:47.571 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:47 localhost nova_compute[281045]: 2025-12-02 09:52:47.593 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock 
"compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:52:47 localhost nova_compute[281045]: 2025-12-02 09:52:47.593 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:52:47 localhost nova_compute[281045]: 2025-12-02 09:52:47.594 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:52:47 localhost nova_compute[281045]: 2025-12-02 09:52:47.594 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:52:47 localhost nova_compute[281045]: 2025-12-02 09:52:47.594 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.043 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.249 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.250 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12491MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", 
"numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.250 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.251 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.337 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.337 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: 
name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.392 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.448 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.449 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.465 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.487 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_C
PU_X86_SSE4A,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.510 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.964 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.969 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.984 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.985 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:52:48 localhost nova_compute[281045]: 2025-12-02 09:52:48.985 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:52:49 localhost nova_compute[281045]: 2025-12-02 09:52:49.942 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:49 localhost nova_compute[281045]: 2025-12-02 09:52:49.942 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:49 localhost nova_compute[281045]: 2025-12-02 09:52:49.943 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:52:49 localhost nova_compute[281045]: 2025-12-02 09:52:49.943 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:52:49 localhost nova_compute[281045]: 2025-12-02 09:52:49.955 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:52:49 localhost nova_compute[281045]: 2025-12-02 09:52:49.956 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:49 localhost nova_compute[281045]: 2025-12-02 09:52:49.956 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:49 localhost nova_compute[281045]: 2025-12-02 09:52:49.956 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:49 localhost nova_compute[281045]: 2025-12-02 09:52:49.957 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:49 localhost nova_compute[281045]: 2025-12-02 09:52:49.957 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:49 localhost nova_compute[281045]: 2025-12-02 09:52:49.957 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:52:50 localhost nova_compute[281045]: 2025-12-02 09:52:50.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:52:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:52:51 localhost podman[286746]: 2025-12-02 09:52:51.309191463 +0000 UTC m=+0.311469795 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:52:51 localhost podman[286746]: 2025-12-02 09:52:51.319972843 +0000 UTC m=+0.322251125 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:52:51 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 04:52:51 localhost podman[286747]: 2025-12-02 09:52:51.382866644 +0000 UTC m=+0.383250589 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 04:52:51 localhost podman[286747]: 2025-12-02 09:52:51.417908004 +0000 UTC m=+0.418291959 container exec_died 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm) Dec 2 04:52:51 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 04:52:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. 
Dec 2 04:52:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:52:52 localhost podman[286807]: 2025-12-02 09:52:52.393491836 +0000 UTC m=+0.062204241 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:52:52 localhost podman[286807]: 2025-12-02 09:52:52.452271571 +0000 UTC m=+0.120983976 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, 
tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 04:52:52 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:52:52 localhost podman[286806]: 2025-12-02 09:52:52.459656117 +0000 UTC m=+0.129750515 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 04:52:52 localhost podman[286806]: 2025-12-02 09:52:52.540177636 +0000 UTC 
m=+0.210272034 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent) Dec 2 04:52:52 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:52:57 localhost sshd[286883]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:53:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:53:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:53:01 localhost podman[286936]: 2025-12-02 09:53:01.361090281 +0000 UTC m=+0.080452689 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:53:01 localhost podman[286937]: 2025-12-02 09:53:01.40983321 +0000 UTC m=+0.124808193 container 
health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, name=ubi9-minimal, io.openshift.expose-services=) Dec 2 04:53:01 localhost podman[286937]: 2025-12-02 09:53:01.423871819 +0000 UTC m=+0.138846802 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, 
name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, version=9.6, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 2 04:53:01 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 04:53:01 localhost podman[286936]: 2025-12-02 09:53:01.476978771 +0000 UTC m=+0.196341259 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:53:01 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:53:01 localhost podman[287004]: Dec 2 04:53:01 localhost podman[287004]: 2025-12-02 09:53:01.549211848 +0000 UTC m=+0.068649478 container create 762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_dhawan, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, ceph=True, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, architecture=x86_64, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:53:01 localhost systemd[1]: Started libpod-conmon-762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092.scope. Dec 2 04:53:01 localhost systemd[1]: Started libcrun container. 
Dec 2 04:53:01 localhost podman[287004]: 2025-12-02 09:53:01.608039385 +0000 UTC m=+0.127476995 container init 762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_dhawan, io.buildah.version=1.41.4, GIT_CLEAN=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, build-date=2025-11-26T19:44:28Z, ceph=True, io.openshift.tags=rhceph ceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:53:01 localhost podman[287004]: 2025-12-02 09:53:01.615837893 +0000 UTC m=+0.135275523 container start 762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_dhawan, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, CEPH_POINT_RELEASE=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, release=1763362218, RELEASE=main) Dec 2 04:53:01 localhost podman[287004]: 2025-12-02 09:53:01.616186354 +0000 UTC m=+0.135624014 container attach 762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_dhawan, vcs-type=git, distribution-scope=public, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.buildah.version=1.41.4, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat 
Ceph Storage 7, ceph=True, architecture=x86_64) Dec 2 04:53:01 localhost podman[287004]: 2025-12-02 09:53:01.518205911 +0000 UTC m=+0.037643571 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:53:01 localhost nostalgic_dhawan[287019]: 167 167 Dec 2 04:53:01 localhost systemd[1]: libpod-762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092.scope: Deactivated successfully. Dec 2 04:53:01 localhost podman[287004]: 2025-12-02 09:53:01.620147474 +0000 UTC m=+0.139585134 container died 762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_dhawan, RELEASE=main, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_BRANCH=main, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, release=1763362218, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public) Dec 2 04:53:01 localhost podman[287025]: 2025-12-02 09:53:01.70608803 +0000 UTC m=+0.077322823 container remove 762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_dhawan, 
GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, vcs-type=git, GIT_BRANCH=main, distribution-scope=public, release=1763362218, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:53:01 localhost systemd[1]: libpod-conmon-762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092.scope: Deactivated successfully. Dec 2 04:53:01 localhost systemd[1]: Reloading. Dec 2 04:53:01 localhost systemd-rc-local-generator[287065]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:53:01 localhost systemd-sysv-generator[287070]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:53:01 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:01 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:01 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:01 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:53:01 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:01 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:01 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:01 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:02 localhost systemd[1]: var-lib-containers-storage-overlay-a52f860537c2330b9e76cf281d9d8bae38bd40ea02009faf801d41c688e46c4c-merged.mount: Deactivated successfully. Dec 2 04:53:02 localhost systemd[1]: Reloading. Dec 2 04:53:02 localhost systemd-rc-local-generator[287104]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:53:02 localhost systemd-sysv-generator[287108]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:53:02 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:02 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:02 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:02 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:53:02 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:02 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:02 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:02 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:02 localhost systemd[1]: Starting Ceph mgr.np0005541914.lljzmk for c7c8e171-a193-56fb-95fa-8879fcfa7074... 
Dec 2 04:53:02 localhost podman[287168]: Dec 2 04:53:02 localhost podman[287168]: 2025-12-02 09:53:02.880956939 +0000 UTC m=+0.074044143 container create 40cf237ba25227db3f61f0d42a0b07debc8628b4fa5c88c59967ae6d5f7c4e2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_BRANCH=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public) Dec 2 04:53:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ebaa8cde4207b2021c704b75d676b1b710d7e338634f37f4e3181a65d5fbbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ebaa8cde4207b2021c704b75d676b1b710d7e338634f37f4e3181a65d5fbbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:02 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/90ebaa8cde4207b2021c704b75d676b1b710d7e338634f37f4e3181a65d5fbbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ebaa8cde4207b2021c704b75d676b1b710d7e338634f37f4e3181a65d5fbbc/merged/var/lib/ceph/mgr/ceph-np0005541914.lljzmk supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:02 localhost podman[287168]: 2025-12-02 09:53:02.942034544 +0000 UTC m=+0.135121758 container init 40cf237ba25227db3f61f0d42a0b07debc8628b4fa5c88c59967ae6d5f7c4e2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=rhceph, ceph=True, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:53:02 localhost podman[287168]: 2025-12-02 09:53:02.851317293 +0000 UTC m=+0.044404557 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:53:02 localhost 
podman[287168]: 2025-12-02 09:53:02.951109542 +0000 UTC m=+0.144196746 container start 40cf237ba25227db3f61f0d42a0b07debc8628b4fa5c88c59967ae6d5f7c4e2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, name=rhceph, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.buildah.version=1.41.4, release=1763362218, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True) Dec 2 04:53:02 localhost bash[287168]: 40cf237ba25227db3f61f0d42a0b07debc8628b4fa5c88c59967ae6d5f7c4e2e Dec 2 04:53:02 localhost systemd[1]: Started Ceph mgr.np0005541914.lljzmk for c7c8e171-a193-56fb-95fa-8879fcfa7074. 
Dec 2 04:53:03 localhost ceph-mgr[287188]: set uid:gid to 167:167 (ceph:ceph) Dec 2 04:53:03 localhost ceph-mgr[287188]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Dec 2 04:53:03 localhost ceph-mgr[287188]: pidfile_write: ignore empty --pid-file Dec 2 04:53:03 localhost ceph-mgr[287188]: mgr[py] Loading python module 'alerts' Dec 2 04:53:03 localhost ceph-mgr[287188]: mgr[py] Module alerts has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-mgr[287188]: mgr[py] Loading python module 'balancer' Dec 2 04:53:03 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:03.125+0000 7f5cc20af140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:53:03.163 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:53:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:53:03.164 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:53:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:53:03.164 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:53:03 localhost ceph-mgr[287188]: mgr[py] Module balancer has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-mgr[287188]: mgr[py] Loading python module 'cephadm' Dec 2 04:53:03 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:03.190+0000 7f5cc20af140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost podman[239757]: time="2025-12-02T09:53:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:53:03 localhost podman[239757]: @ - - [02/Dec/2025:09:53:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154481 "" "Go-http-client/1.1" Dec 2 04:53:03 localhost podman[239757]: @ - - [02/Dec/2025:09:53:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18680 "" "Go-http-client/1.1" Dec 2 04:53:03 localhost ceph-mgr[287188]: mgr[py] Loading python module 'crash' Dec 2 04:53:03 localhost ceph-mgr[287188]: mgr[py] Module crash has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-mgr[287188]: mgr[py] Loading python module 'dashboard' Dec 2 04:53:03 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:03.845+0000 7f5cc20af140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Dec 2 04:53:04 localhost ceph-mgr[287188]: mgr[py] Loading python module 'devicehealth' Dec 2 04:53:04 localhost systemd[1]: tmp-crun.F8IFrU.mount: Deactivated successfully. 
Dec 2 04:53:04 localhost podman[287345]: 2025-12-02 09:53:04.413763582 +0000 UTC m=+0.096754417 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, maintainer=Guillaume Abrioux , ceph=True, vcs-type=git, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhceph, io.openshift.expose-services=, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:53:04 localhost ceph-mgr[287188]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Dec 2 04:53:04 localhost ceph-mgr[287188]: mgr[py] Loading python module 'diskprediction_local' Dec 2 04:53:04 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:04.427+0000 7f5cc20af140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Dec 2 04:53:04 localhost podman[287345]: 2025-12-02 09:53:04.530150167 +0000 UTC m=+0.213140982 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, name=rhceph, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, release=1763362218, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.component=rhceph-container) Dec 2 04:53:04 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Dec 2 04:53:04 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
Dec 2 04:53:04 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: from numpy import show_config as show_numpy_config Dec 2 04:53:04 localhost ceph-mgr[287188]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Dec 2 04:53:04 localhost ceph-mgr[287188]: mgr[py] Loading python module 'influx' Dec 2 04:53:04 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:04.574+0000 7f5cc20af140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Dec 2 04:53:04 localhost ceph-mgr[287188]: mgr[py] Module influx has missing NOTIFY_TYPES member Dec 2 04:53:04 localhost ceph-mgr[287188]: mgr[py] Loading python module 'insights' Dec 2 04:53:04 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:04.633+0000 7f5cc20af140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Dec 2 04:53:04 localhost ceph-mgr[287188]: mgr[py] Loading python module 'iostat' Dec 2 04:53:04 localhost ceph-mgr[287188]: mgr[py] Module iostat has missing NOTIFY_TYPES member Dec 2 04:53:04 localhost ceph-mgr[287188]: mgr[py] Loading python module 'k8sevents' Dec 2 04:53:04 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:04.750+0000 7f5cc20af140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Loading python module 'localpool' Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Loading python module 'mds_autoscaler' Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Loading python module 'mirroring' Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Loading python module 'nfs' Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Module nfs has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Loading python module 'orchestrator' Dec 2 04:53:05 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:05.537+0000 7f5cc20af140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Loading python module 'osd_perf_query' Dec 2 04:53:05 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:05.689+0000 7f5cc20af140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:05.753+0000 7f5cc20af140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Loading python module 'osd_support' Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Loading python module 'pg_autoscaler' Dec 2 04:53:05 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:05.811+0000 7f5cc20af140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Loading python module 'progress' Dec 2 04:53:05 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:05.882+0000 7f5cc20af140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Module progress has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[287188]: mgr[py] Loading python module 'prometheus' Dec 2 04:53:05 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:05.943+0000 7f5cc20af140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Dec 2 04:53:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:53:06 localhost ceph-mgr[287188]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Dec 2 04:53:06 localhost ceph-mgr[287188]: mgr[py] Loading python module 'rbd_support' Dec 2 04:53:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:06.269+0000 7f5cc20af140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Dec 2 04:53:06 localhost podman[287464]: 2025-12-02 09:53:06.291880043 +0000 UTC m=+0.077944343 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 2 04:53:06 localhost podman[287464]: 2025-12-02 09:53:06.307954754 +0000 UTC m=+0.094019094 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:53:06 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:53:06 localhost ceph-mgr[287188]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Dec 2 04:53:06 localhost ceph-mgr[287188]: mgr[py] Loading python module 'restful' Dec 2 04:53:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:06.358+0000 7f5cc20af140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Dec 2 04:53:06 localhost ceph-mgr[287188]: mgr[py] Loading python module 'rgw' Dec 2 04:53:06 localhost ceph-mgr[287188]: mgr[py] Module rgw has missing NOTIFY_TYPES member Dec 2 04:53:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:06.689+0000 7f5cc20af140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Dec 2 04:53:06 localhost ceph-mgr[287188]: mgr[py] Loading python module 'rook' Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Module rook has missing NOTIFY_TYPES member Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Loading python module 'selftest' Dec 2 04:53:07 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.136+0000 7f5cc20af140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Module selftest has missing NOTIFY_TYPES member Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Loading python module 'snap_schedule' Dec 2 04:53:07 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.198+0000 7f5cc20af140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Loading python module 'stats' Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Loading python module 'status' Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Module status has missing NOTIFY_TYPES member Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Loading python module 'telegraf' Dec 2 04:53:07 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.392+0000 7f5cc20af140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Loading python module 'telemetry' Dec 2 04:53:07 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.451+0000 7f5cc20af140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Loading python module 'test_orchestrator' Dec 2 04:53:07 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.584+0000 7f5cc20af140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Loading python module 'volumes' Dec 2 04:53:07 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.727+0000 7f5cc20af140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Module volumes has missing NOTIFY_TYPES member 
Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Loading python module 'zabbix' Dec 2 04:53:07 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.912+0000 7f5cc20af140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Dec 2 04:53:07 localhost ceph-mgr[287188]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Dec 2 04:53:07 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.969+0000 7f5cc20af140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Dec 2 04:53:07 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5f1e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Dec 2 04:53:07 localhost ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.103:6800/3096645673 Dec 2 04:53:08 localhost ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.103:6800/3096645673 Dec 2 04:53:12 localhost openstack_network_exporter[241816]: ERROR 09:53:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:53:12 localhost openstack_network_exporter[241816]: ERROR 09:53:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:53:12 localhost openstack_network_exporter[241816]: ERROR 09:53:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:53:12 localhost openstack_network_exporter[241816]: ERROR 09:53:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:53:12 localhost openstack_network_exporter[241816]: Dec 2 04:53:12 localhost openstack_network_exporter[241816]: ERROR 09:53:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:53:12 localhost openstack_network_exporter[241816]: Dec 2 04:53:17 localhost podman[288278]: Dec 2 04:53:17 localhost podman[288278]: 2025-12-02 
09:53:17.578812376 +0000 UTC m=+0.077163648 container create 0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_benz, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.component=rhceph-container, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=) Dec 2 04:53:17 localhost systemd[1]: Started libpod-conmon-0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346.scope. Dec 2 04:53:17 localhost systemd[1]: Started libcrun container. 
Dec 2 04:53:17 localhost podman[288278]: 2025-12-02 09:53:17.545979863 +0000 UTC m=+0.044331135 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:53:17 localhost podman[288278]: 2025-12-02 09:53:17.649476134 +0000 UTC m=+0.147827436 container init 0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_benz, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7) Dec 2 04:53:17 localhost podman[288278]: 2025-12-02 09:53:17.661113931 +0000 UTC m=+0.159465213 container start 0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_benz, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, architecture=x86_64, distribution-scope=public, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, name=rhceph, description=Red Hat Ceph Storage 7) Dec 2 04:53:17 localhost podman[288278]: 2025-12-02 09:53:17.661531743 +0000 UTC m=+0.159883065 container attach 0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_benz, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, ceph=True, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.expose-services=, description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph 
Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:53:17 localhost bold_benz[288293]: 167 167 Dec 2 04:53:17 localhost systemd[1]: libpod-0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346.scope: Deactivated successfully. Dec 2 04:53:17 localhost podman[288278]: 2025-12-02 09:53:17.664423742 +0000 UTC m=+0.162775034 container died 0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_benz, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=1763362218, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, GIT_CLEAN=True) Dec 2 04:53:17 localhost podman[288300]: 2025-12-02 09:53:17.748602783 +0000 UTC m=+0.072326061 container remove 0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_benz, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, build-date=2025-11-26T19:44:28Z, release=1763362218) Dec 2 04:53:17 localhost systemd[1]: libpod-conmon-0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346.scope: Deactivated successfully. 
Dec 2 04:53:17 localhost podman[288317]: Dec 2 04:53:17 localhost podman[288317]: 2025-12-02 09:53:17.847239116 +0000 UTC m=+0.071403713 container create 0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_shirley, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, version=7, io.openshift.expose-services=, release=1763362218, vcs-type=git, GIT_CLEAN=True) Dec 2 04:53:17 localhost systemd[1]: Started libpod-conmon-0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5.scope. Dec 2 04:53:17 localhost systemd[1]: Started libcrun container. 
Dec 2 04:53:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628e92a618a11d767b6c844342998e7261329566e2e19b97a6c0c642bce62acc/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628e92a618a11d767b6c844342998e7261329566e2e19b97a6c0c642bce62acc/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628e92a618a11d767b6c844342998e7261329566e2e19b97a6c0c642bce62acc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628e92a618a11d767b6c844342998e7261329566e2e19b97a6c0c642bce62acc/merged/var/lib/ceph/mon/ceph-np0005541914 supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:17 localhost podman[288317]: 2025-12-02 09:53:17.907171216 +0000 UTC m=+0.131335813 container init 0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_shirley, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, name=rhceph, release=1763362218, com.redhat.component=rhceph-container) Dec 2 04:53:17 localhost podman[288317]: 2025-12-02 09:53:17.915705427 +0000 UTC m=+0.139870024 container start 0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_shirley, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:53:17 localhost podman[288317]: 2025-12-02 09:53:17.915985656 +0000 UTC m=+0.140150283 container attach 0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_shirley, build-date=2025-11-26T19:44:28Z, 
url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, version=7, CEPH_POINT_RELEASE=, name=rhceph, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public) Dec 2 04:53:17 localhost podman[288317]: 2025-12-02 09:53:17.821339165 +0000 UTC m=+0.045503812 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:53:18 localhost systemd[1]: libpod-0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5.scope: Deactivated successfully. 
Dec 2 04:53:18 localhost podman[288317]: 2025-12-02 09:53:18.012278627 +0000 UTC m=+0.236443254 container died 0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_shirley, ceph=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:53:18 localhost podman[288358]: 2025-12-02 09:53:18.097396617 +0000 UTC m=+0.073381212 container remove 0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_shirley, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, ceph=True, vendor=Red Hat, Inc., version=7, 
GIT_CLEAN=True, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, architecture=x86_64, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=) Dec 2 04:53:18 localhost systemd[1]: libpod-conmon-0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5.scope: Deactivated successfully. Dec 2 04:53:18 localhost systemd[1]: Reloading. Dec 2 04:53:18 localhost systemd-sysv-generator[288402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:53:18 localhost systemd-rc-local-generator[288396]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: var-lib-containers-storage-overlay-a176400a0e49fa3930f61de145058187d36c22c3614a805fe3876899af335cf8-merged.mount: Deactivated successfully. Dec 2 04:53:18 localhost systemd[1]: Reloading. Dec 2 04:53:18 localhost systemd-rc-local-generator[288439]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:53:18 localhost systemd-sysv-generator[288444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:18 localhost systemd[1]: Starting Ceph mon.np0005541914 for c7c8e171-a193-56fb-95fa-8879fcfa7074... Dec 2 04:53:19 localhost podman[288508]: Dec 2 04:53:19 localhost podman[288508]: 2025-12-02 09:53:19.190702305 +0000 UTC m=+0.058333723 container create 699b233252c58098b0dcca9b2b21425d550e7754773bf4b3759bf26abfe89544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, vendor=Red Hat, Inc., architecture=x86_64, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, RELEASE=main, io.openshift.expose-services=, distribution-scope=public, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on 
RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7) Dec 2 04:53:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce118f9e1514dd9e8c61f039c0b5ce0d2beef8304000bf74b350ea0ec7a4ea4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce118f9e1514dd9e8c61f039c0b5ce0d2beef8304000bf74b350ea0ec7a4ea4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce118f9e1514dd9e8c61f039c0b5ce0d2beef8304000bf74b350ea0ec7a4ea4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce118f9e1514dd9e8c61f039c0b5ce0d2beef8304000bf74b350ea0ec7a4ea4b/merged/var/lib/ceph/mon/ceph-np0005541914 supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:19 localhost podman[288508]: 2025-12-02 09:53:19.249371096 +0000 UTC m=+0.117002514 container init 699b233252c58098b0dcca9b2b21425d550e7754773bf4b3759bf26abfe89544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, CEPH_POINT_RELEASE=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_CLEAN=True, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vcs-type=git, version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=1763362218, GIT_BRANCH=main) Dec 2 04:53:19 localhost podman[288508]: 2025-12-02 09:53:19.258928089 +0000 UTC m=+0.126559517 container start 699b233252c58098b0dcca9b2b21425d550e7754773bf4b3759bf26abfe89544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux , name=rhceph, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, version=7, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:53:19 localhost bash[288508]: 699b233252c58098b0dcca9b2b21425d550e7754773bf4b3759bf26abfe89544 Dec 2 04:53:19 localhost 
podman[288508]: 2025-12-02 09:53:19.161112351 +0000 UTC m=+0.028743769 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:53:19 localhost systemd[1]: Started Ceph mon.np0005541914 for c7c8e171-a193-56fb-95fa-8879fcfa7074. Dec 2 04:53:19 localhost ceph-mon[288526]: set uid:gid to 167:167 (ceph:ceph) Dec 2 04:53:19 localhost ceph-mon[288526]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Dec 2 04:53:19 localhost ceph-mon[288526]: pidfile_write: ignore empty --pid-file Dec 2 04:53:19 localhost ceph-mon[288526]: load: jerasure load: lrc Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: RocksDB version: 7.9.2 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Git sha 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: DB SUMMARY Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: DB Session ID: ES6HEAUO0NO66H72LGQU Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: CURRENT file: CURRENT Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: IDENTITY file: IDENTITY Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005541914/store.db dir, Total Num: 0, files: Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005541914/store.db: 000004.log size: 761 ; Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.error_if_exists: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.create_if_missing: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.paranoid_checks: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: 
rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.env: 0x5617097a49e0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.fs: PosixFileSystem Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.info_log: 0x56170abd2d20 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_file_opening_threads: 16 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.statistics: (nil) Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.use_fsync: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_log_file_size: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.log_file_time_to_roll: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.keep_log_file_num: 1000 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.recycle_log_file_num: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.allow_fallocate: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.allow_mmap_reads: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.allow_mmap_writes: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.use_direct_reads: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.create_missing_column_families: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.db_log_dir: Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.wal_dir: Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.table_cache_numshardbits: 6 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 2 04:53:19 localhost 
ceph-mon[288526]: rocksdb: Options.manifest_preallocation_size: 4194304 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.is_fd_close_on_exec: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.advise_random_on_open: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.db_write_buffer_size: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.write_buffer_manager: 0x56170abe3540 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.access_hint_on_compaction_start: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.random_access_max_buffer_size: 1048576 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.use_adaptive_mutex: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.rate_limiter: (nil) Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.wal_recovery_mode: 2 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.enable_thread_tracking: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.enable_pipelined_write: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.unordered_write: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.row_cache: None Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.wal_filter: None Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.allow_ingest_behind: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.two_write_queues: 0 Dec 2 
04:53:19 localhost ceph-mon[288526]: rocksdb: Options.manual_wal_flush: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.wal_compression: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.atomic_flush: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.persist_stats_to_disk: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.log_readahead_size: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.file_checksum_gen_factory: Unknown Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.best_efforts_recovery: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.allow_data_in_errors: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.db_host_id: __hostname__ Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.enforce_single_del_contracts: true Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_background_jobs: 2 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_background_compactions: -1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_subcompactions: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.delayed_write_rate : 16777216 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_total_wal_size: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: 
Options.stats_dump_period_sec: 600 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.stats_persist_period_sec: 600 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_open_files: -1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.bytes_per_sync: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.wal_bytes_per_sync: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compaction_readahead_size: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_background_flushes: -1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Compression algorithms supported: Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: #011kZSTD supported: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: #011kXpressCompression supported: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: #011kBZip2Compression supported: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: #011kLZ4Compression supported: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: #011kZlibCompression supported: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: #011kSnappyCompression supported: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: DMutex implementation: pthread_mutex_t Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005541914/store.db/MANIFEST-000005 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 2 04:53:19 
localhost ceph-mon[288526]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.merge_operator: Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compaction_filter: None Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compaction_filter_factory: None Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.sst_partitioner_factory: None Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56170abd2980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56170abcf350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.write_buffer_size: 33554432 Dec 2 04:53:19 
localhost ceph-mon[288526]: rocksdb: Options.max_write_buffer_number: 2 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compression: NoCompression Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.bottommost_compression: Disabled Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.prefix_extractor: nullptr Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.num_levels: 7 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compression_opts.level: 32767 Dec 2 04:53:19 localhost 
ceph-mon[288526]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compression_opts.enabled: false Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_bytes_for_level_base: 268435456 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.arena_block_size: 1048576 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 04:53:19 localhost 
ceph-mon[288526]: rocksdb: Options.table_properties_collectors: Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.inplace_update_support: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.bloom_locality: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.max_successive_merges: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.force_consistency_checks: 1 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.ttl: 2592000 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.enable_blob_files: false Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.min_blob_size: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.blob_file_size: 268435456 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005541914/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fef79939-f0d3-4c6e-a3c1-7bf191246dd2 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669199325356, "job": 1, "event": "recovery_started", "wal_files": [4]} Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669199328258, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", 
"column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669199328432, "job": 1, "event": "recovery_finished"} Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56170abf6e00 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: DB pointer 0x56170acec000 Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:53:19 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval 
WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.84 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Sum 1/0 1.84 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 
level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56170abcf350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Dec 2 04:53:19 localhost ceph-mon[288526]: mon.np0005541914 does not exist in monmap, will attempt to join an existing cluster Dec 2 04:53:19 localhost ceph-mon[288526]: using public_addr v2:172.18.0.108:0/0 -> [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] Dec 2 04:53:19 localhost ceph-mon[288526]: starting mon.np0005541914 rank -1 at public addrs [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] at bind addrs [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005541914 fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 Dec 2 04:53:19 localhost ceph-mon[288526]: mon.np0005541914@-1(???) 
e0 preinit fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 Dec 2 04:53:19 localhost ceph-mon[288526]: mon.np0005541914@-1(synchronizing) e3 sync_obtain_latest_monmap Dec 2 04:53:19 localhost ceph-mon[288526]: mon.np0005541914@-1(synchronizing) e3 sync_obtain_latest_monmap obtained monmap e3 Dec 2 04:53:19 localhost ceph-mon[288526]: mon.np0005541914@-1(synchronizing).mds e16 new map Dec 2 04:53:19 localhost ceph-mon[288526]: mon.np0005541914@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-02T08:05:53.424954+0000#012modified#0112025-12-02T09:52:13.505190+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01184#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26573}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26573 members: 26573#012[mds.mds.np0005541912.ghcwcm{0:26573} state up:active seq 13 addr [v2:172.18.0.106:6808/955707462,v1:172.18.0.106:6809/955707462] compat 
{c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005541914.sqgqkj{-1:16923} state up:standby seq 1 addr [v2:172.18.0.108:6808/2216063099,v1:172.18.0.108:6809/2216063099] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005541913.maexpe{-1:26386} state up:standby seq 1 addr [v2:172.18.0.107:6808/3746047079,v1:172.18.0.107:6809/3746047079] compat {c=[1],r=[1],i=[17ff]}] Dec 2 04:53:19 localhost ceph-mon[288526]: mon.np0005541914@-1(synchronizing).osd e85 crush map has features 3314933000852226048, adjusting msgr requires Dec 2 04:53:19 localhost ceph-mon[288526]: mon.np0005541914@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires Dec 2 04:53:19 localhost ceph-mon[288526]: mon.np0005541914@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires Dec 2 04:53:19 localhost ceph-mon[288526]: mon.np0005541914@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Added label mgr to host np0005541912.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Added label mgr to host np0005541913.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": 
"client.admin"} : dispatch Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Added label mgr to host np0005541914.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Saving service mgr spec with placement label:mgr Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Dec 2 04:53:19 localhost ceph-mon[288526]: Deploying daemon mgr.np0005541912.qwddia on np0005541912.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 
172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Dec 2 04:53:19 localhost ceph-mon[288526]: Deploying daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Added label mon to host np0005541909.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Added label _admin to host np0005541909.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch 
Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Dec 2 04:53:19 localhost ceph-mon[288526]: Deploying daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Added label mon to host np0005541910.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Added label _admin to host np0005541910.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Added label mon to host np0005541911.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost 
ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Added label _admin to host np0005541911.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Added label mon to host np0005541912.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:19 localhost ceph-mon[288526]: 
Added label _admin to host np0005541912.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:19 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:19 localhost ceph-mon[288526]: Added label mon to host np0005541913.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:53:19 localhost ceph-mon[288526]: Added label _admin to host np0005541913.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:19 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:19 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' 
entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Added label mon to host np0005541914.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:19 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Added label _admin to host np0005541914.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:19 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:19 localhost ceph-mon[288526]: Saving service mon spec with placement label:mon Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:19 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:19 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 
04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:53:19 localhost ceph-mon[288526]: Deploying daemon mon.np0005541914 on np0005541914.localdomain Dec 2 04:53:19 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:19 localhost ceph-mon[288526]: mon.np0005541914@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3 Dec 2 04:53:19 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5f1e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Dec 2 04:53:21 localhost ceph-mon[288526]: mon.np0005541914@-1(probing) e4 my rank is now 3 (was -1) Dec 2 04:53:21 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election Dec 2 04:53:21 localhost ceph-mon[288526]: paxos.3).electionLogic(0) init, first boot, initializing epoch at 1 Dec 2 04:53:21 localhost ceph-mon[288526]: mon.np0005541914@3(electing) e4 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:53:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:53:22 localhost ceph-mon[288526]: mon.np0005541914@3(electing) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Dec 2 04:53:22 localhost systemd[1]: tmp-crun.Fx4rX9.mount: Deactivated successfully. 
Dec 2 04:53:22 localhost podman[288566]: 2025-12-02 09:53:22.109493295 +0000 UTC m=+0.101682297 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 04:53:22 localhost podman[288565]: 2025-12-02 09:53:22.07133176 +0000 UTC m=+0.070992491 container health_status 
8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:53:22 localhost podman[288565]: 2025-12-02 09:53:22.155997016 +0000 UTC m=+0.155657707 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) 
Dec 2 04:53:22 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:53:22 localhost podman[288566]: 2025-12-02 09:53:22.174865643 +0000 UTC m=+0.167054655 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:53:22 localhost 
systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 04:53:22 localhost ceph-mon[288526]: mon.np0005541914@3(electing) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Dec 2 04:53:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:53:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:53:23 localhost podman[288606]: 2025-12-02 09:53:23.074516224 +0000 UTC m=+0.081191841 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:53:23 localhost podman[288606]: 2025-12-02 09:53:23.084005254 +0000 UTC m=+0.090680911 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 2 04:53:23 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:53:23 localhost systemd[1]: tmp-crun.9iKH8a.mount: Deactivated successfully. Dec 2 04:53:23 localhost podman[288607]: 2025-12-02 09:53:23.186373571 +0000 UTC m=+0.188164178 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 04:53:23 localhost podman[288607]: 2025-12-02 09:53:23.254603985 +0000 UTC m=+0.256394552 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 04:53:23 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:53:24 localhost ceph-mon[288526]: mon.np0005541914@3(electing) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Dec 2 04:53:24 localhost ceph-mon[288526]: mon.np0005541914@3(electing) e4 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:24 localhost ceph-mon[288526]: mon.np0005541914@3(peon) e4 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Dec 2 04:53:24 localhost ceph-mon[288526]: mon.np0005541914@3(peon) e4 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Dec 2 04:53:24 localhost ceph-mon[288526]: mon.np0005541914@3(peon) e4 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:24 localhost ceph-mon[288526]: mgrc update_daemon_metadata mon.np0005541914 metadata {addrs=[v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005541914.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005541914.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Dec 2 04:53:24 
localhost ceph-mon[288526]: Deploying daemon mon.np0005541913 on np0005541913.localdomain Dec 2 04:53:24 localhost ceph-mon[288526]: mon.np0005541909 calling monitor election Dec 2 04:53:24 localhost ceph-mon[288526]: mon.np0005541911 calling monitor election Dec 2 04:53:24 localhost ceph-mon[288526]: mon.np0005541910 calling monitor election Dec 2 04:53:24 localhost ceph-mon[288526]: mon.np0005541914 calling monitor election Dec 2 04:53:24 localhost ceph-mon[288526]: mon.np0005541909 is new leader, mons np0005541909,np0005541911,np0005541910,np0005541914 in quorum (ranks 0,1,2,3) Dec 2 04:53:24 localhost ceph-mon[288526]: overall HEALTH_OK Dec 2 04:53:24 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:24 localhost ceph-mon[288526]: mon.np0005541914@3(peon) e4 handle_auth_request failed to assign global_id Dec 2 04:53:25 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:25 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:25 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:53:25 localhost ceph-mon[288526]: Deploying daemon mon.np0005541912 on np0005541912.localdomain Dec 2 04:53:26 localhost ceph-mon[288526]: mon.np0005541914@3(peon) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Dec 2 04:53:26 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5ef20 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Dec 2 04:53:26 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election Dec 2 04:53:26 localhost ceph-mon[288526]: paxos.3).electionLogic(18) init, last seen epoch 18 Dec 2 04:53:26 localhost ceph-mon[288526]: mon.np0005541914@3(electing) e5 
collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:26 localhost ceph-mon[288526]: mon.np0005541914@3(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:27 localhost ceph-mon[288526]: mon.np0005541914@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Dec 2 04:53:27 localhost ceph-mon[288526]: mon.np0005541914@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Dec 2 04:53:27 localhost ceph-mon[288526]: mon.np0005541914@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Dec 2 04:53:29 localhost ceph-mon[288526]: mon.np0005541914@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Dec 2 04:53:31 localhost ceph-mds[285895]: mds.beacon.mds.np0005541914.sqgqkj missed beacon ack from the monitors Dec 2 04:53:31 localhost ceph-mon[288526]: mon.np0005541914@3(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:31 localhost ceph-mon[288526]: mon.np0005541914@3(peon) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:31 localhost ceph-mon[288526]: mon.np0005541914@3(peon) e5 handle_auth_request failed to assign global_id Dec 2 04:53:31 localhost ceph-mon[288526]: mon.np0005541910 calling monitor election Dec 2 04:53:31 localhost ceph-mon[288526]: mon.np0005541909 calling monitor election Dec 2 04:53:31 localhost ceph-mon[288526]: mon.np0005541911 calling monitor election Dec 2 04:53:31 localhost ceph-mon[288526]: mon.np0005541914 calling monitor election Dec 2 04:53:31 localhost ceph-mon[288526]: mon.np0005541913 calling monitor election Dec 2 04:53:31 localhost ceph-mon[288526]: mon.np0005541909 is new leader, mons np0005541909,np0005541911,np0005541910,np0005541914,np0005541913 in 
quorum (ranks 0,1,2,3,4) Dec 2 04:53:31 localhost ceph-mon[288526]: overall HEALTH_OK Dec 2 04:53:31 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:31 localhost ceph-mon[288526]: mon.np0005541914@3(peon) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Dec 2 04:53:31 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5f600 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Dec 2 04:53:31 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election Dec 2 04:53:31 localhost ceph-mon[288526]: paxos.3).electionLogic(22) init, last seen epoch 22 Dec 2 04:53:31 localhost ceph-mon[288526]: mon.np0005541914@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:31 localhost ceph-mon[288526]: mon.np0005541914@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:53:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:53:32 localhost systemd[1]: tmp-crun.mqs3HQ.mount: Deactivated successfully. 
Dec 2 04:53:32 localhost podman[288667]: 2025-12-02 09:53:32.091413126 +0000 UTC m=+0.088518135 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:53:32 localhost podman[288667]: 2025-12-02 09:53:32.099911006 +0000 UTC m=+0.097016085 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:53:32 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:53:32 localhost podman[288668]: 2025-12-02 09:53:32.196728693 +0000 UTC m=+0.191521761 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9) Dec 2 04:53:32 localhost podman[288668]: 2025-12-02 09:53:32.21530238 +0000 UTC m=+0.210095438 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, managed_by=edpm_ansible, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 2 04:53:32 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 04:53:33 localhost podman[288819]: 2025-12-02 09:53:33.069735731 +0000 UTC m=+0.088347610 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, distribution-scope=public, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1763362218, io.buildah.version=1.41.4, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux ) Dec 2 04:53:33 localhost systemd[1]: tmp-crun.HCbcC8.mount: Deactivated successfully. 
Dec 2 04:53:33 localhost podman[288819]: 2025-12-02 09:53:33.180776183 +0000 UTC m=+0.199388052 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1763362218, distribution-scope=public, vcs-type=git, ceph=True, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, name=rhceph, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., RELEASE=main) Dec 2 04:53:33 localhost podman[239757]: time="2025-12-02T09:53:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:53:33 localhost podman[239757]: @ - - [02/Dec/2025:09:53:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 04:53:33 localhost podman[239757]: @ - - [02/Dec/2025:09:53:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19167 "" "Go-http-client/1.1" Dec 2 04:53:36 localhost ceph-mon[288526]: 
mon.np0005541914@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:36 localhost ceph-mon[288526]: mon.np0005541914@3(peon) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:36 localhost ceph-mon[288526]: mon.np0005541909 calling monitor election Dec 2 04:53:36 localhost ceph-mon[288526]: mon.np0005541910 calling monitor election Dec 2 04:53:36 localhost ceph-mon[288526]: mon.np0005541911 calling monitor election Dec 2 04:53:36 localhost ceph-mon[288526]: mon.np0005541914 calling monitor election Dec 2 04:53:36 localhost ceph-mon[288526]: mon.np0005541913 calling monitor election Dec 2 04:53:36 localhost ceph-mon[288526]: mon.np0005541912 calling monitor election Dec 2 04:53:36 localhost ceph-mon[288526]: mon.np0005541909 is new leader, mons np0005541909,np0005541911,np0005541910,np0005541914,np0005541913,np0005541912 in quorum (ranks 0,1,2,3,4,5) Dec 2 04:53:36 localhost ceph-mon[288526]: overall HEALTH_OK Dec 2 04:53:36 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:36 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:53:36 localhost systemd[1]: tmp-crun.8Fkl6g.mount: Deactivated successfully. 
Dec 2 04:53:36 localhost podman[288955]: 2025-12-02 09:53:36.908845464 +0000 UTC m=+0.091717192 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:53:36 localhost podman[288955]: 2025-12-02 09:53:36.924774261 +0000 UTC m=+0.107645989 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 2 04:53:36 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:53:37 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:37 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:37 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:37 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:37 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:37 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:38 localhost ceph-mon[288526]: Updating np0005541909.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:38 localhost ceph-mon[288526]: Updating np0005541910.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:38 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:38 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:38 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:38 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:38 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[288526]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:39 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:39 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:39 localhost ceph-mon[288526]: Updating 
np0005541909.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:39 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:39 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:39 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:53:39 localhost 
ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.843517) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219843620, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 10056, "num_deletes": 255, "total_data_size": 10697787, "memory_usage": 10990336, "flush_reason": "Manual Compaction"} Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219895795, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 9090844, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 10061, "table_properties": {"data_size": 9037725, "index_size": 28245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23813, "raw_key_size": 250162, "raw_average_key_size": 26, "raw_value_size": 8876005, "raw_average_value_size": 934, "num_data_blocks": 1084, "num_entries": 9502, "num_filter_entries": 9502, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", 
"property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 1764669199, "file_creation_time": 1764669219, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 52335 microseconds, and 14235 cpu microseconds. Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.895855) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 9090844 bytes OK Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.895880) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.897521) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.897551) EVENT_LOG_v1 {"time_micros": 1764669219897543, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.897572) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 10628616, prev total WAL file size 10628616, number of live WAL files 2. 
Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.899777) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. '7061786F73003130323932' seq:0, type:0; will stop at (end) Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(8877KB) 8(1887B)] Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219899902, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 9092731, "oldest_snapshot_seqno": -1} Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9250 keys, 9086935 bytes, temperature: kUnknown Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219967548, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 9086935, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9034484, "index_size": 28222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 245355, "raw_average_key_size": 26, "raw_value_size": 8876063, "raw_average_value_size": 959, "num_data_blocks": 1083, "num_entries": 9250, 
"num_filter_entries": 9250, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 1764669219, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.967897) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 9086935 bytes Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.970300) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.2 rd, 134.1 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(8.7, 0.0 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 9507, records dropped: 257 output_compression: NoCompression Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.970329) EVENT_LOG_v1 {"time_micros": 1764669219970317, "job": 4, "event": "compaction_finished", "compaction_time_micros": 67773, "compaction_time_cpu_micros": 30217, "output_level": 6, "num_output_files": 1, "total_output_size": 9086935, "num_input_records": 9507, "num_output_records": 9250, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219971751, "job": 4, "event": "table_file_deletion", "file_number": 14} Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219971828, "job": 4, "event": 
"table_file_deletion", "file_number": 8} Dec 2 04:53:39 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.899433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:53:40 localhost ceph-mon[288526]: Reconfiguring mon.np0005541909 (monmap changed)... Dec 2 04:53:40 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541909 on np0005541909.localdomain Dec 2 04:53:40 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:40 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:40 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541909.kfesnk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:53:41 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541909.kfesnk (monmap changed)... 
Dec 2 04:53:41 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541909.kfesnk on np0005541909.localdomain Dec 2 04:53:41 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:41 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:41 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541909.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:53:41 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:42 localhost openstack_network_exporter[241816]: ERROR 09:53:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:53:42 localhost openstack_network_exporter[241816]: ERROR 09:53:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:53:42 localhost openstack_network_exporter[241816]: ERROR 09:53:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:53:42 localhost openstack_network_exporter[241816]: ERROR 09:53:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:53:42 localhost openstack_network_exporter[241816]: Dec 2 04:53:42 localhost openstack_network_exporter[241816]: ERROR 09:53:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:53:42 localhost openstack_network_exporter[241816]: Dec 2 04:53:42 localhost ceph-mon[288526]: Reconfiguring crash.np0005541909 (monmap changed)... 
Dec 2 04:53:42 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541909 on np0005541909.localdomain Dec 2 04:53:42 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:42 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:42 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:53:43 localhost ceph-mon[288526]: Reconfiguring crash.np0005541910 (monmap changed)... Dec 2 04:53:43 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain Dec 2 04:53:43 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:43 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:43 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:53:44 localhost ceph-mon[288526]: Reconfiguring mon.np0005541910 (monmap changed)... 
Dec 2 04:53:44 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541910 on np0005541910.localdomain Dec 2 04:53:44 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:44 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:44 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:53:44 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:44 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:44 localhost ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:53:45 localhost ceph-mon[288526]: mon.np0005541914@3(peon).osd e85 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Dec 2 04:53:45 localhost ceph-mon[288526]: mon.np0005541914@3(peon).osd e85 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Dec 2 04:53:45 localhost ceph-mon[288526]: mon.np0005541914@3(peon).osd e86 e86: 6 total, 6 up, 6 in Dec 2 04:53:45 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Dec 2 04:53:45 localhost systemd[1]: session-23.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-21.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-18.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-20.scope: Deactivated successfully. 
Dec 2 04:53:45 localhost systemd[1]: session-24.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-17.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-22.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-14.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-16.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd-logind[760]: Session 16 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[760]: Session 23 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[760]: Session 22 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[760]: Session 21 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[760]: Session 24 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[760]: Session 18 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[760]: Session 20 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[760]: Session 17 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[760]: Session 14 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[760]: Removed session 23. Dec 2 04:53:45 localhost systemd[1]: session-19.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-26.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-26.scope: Consumed 3min 26.753s CPU time. Dec 2 04:53:45 localhost systemd-logind[760]: Removed session 21. Dec 2 04:53:45 localhost systemd-logind[760]: Session 19 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[760]: Session 26 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[760]: Removed session 18. 
Dec 2 04:53:45 localhost systemd-logind[760]: Removed session 20. Dec 2 04:53:45 localhost systemd[1]: session-25.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd-logind[760]: Session 25 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[760]: Removed session 24. Dec 2 04:53:45 localhost systemd-logind[760]: Removed session 17. Dec 2 04:53:45 localhost systemd-logind[760]: Removed session 22. Dec 2 04:53:45 localhost systemd-logind[760]: Removed session 14. Dec 2 04:53:45 localhost systemd-logind[760]: Removed session 16. Dec 2 04:53:45 localhost systemd-logind[760]: Removed session 19. Dec 2 04:53:45 localhost systemd-logind[760]: Removed session 26. Dec 2 04:53:45 localhost systemd-logind[760]: Removed session 25. Dec 2 04:53:45 localhost sshd[289362]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:53:45 localhost systemd-logind[760]: New session 64 of user ceph-admin. Dec 2 04:53:45 localhost systemd[1]: Started Session 64 of User ceph-admin. Dec 2 04:53:45 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541910.kzipdo (monmap changed)... Dec 2 04:53:45 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain Dec 2 04:53:45 localhost ceph-mon[288526]: Reconfiguring mon.np0005541911 (monmap changed)... Dec 2 04:53:45 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain Dec 2 04:53:45 localhost ceph-mon[288526]: from='client.? 172.18.0.103:0/1327578721' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 2 04:53:45 localhost ceph-mon[288526]: Activating manager daemon np0005541911.adcgiw Dec 2 04:53:45 localhost ceph-mon[288526]: from='client.? 
172.18.0.103:0/1327578721' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 2 04:53:45 localhost ceph-mon[288526]: Manager daemon np0005541911.adcgiw is now available Dec 2 04:53:45 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/mirror_snapshot_schedule"} : dispatch Dec 2 04:53:45 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/mirror_snapshot_schedule"} : dispatch Dec 2 04:53:45 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/trash_purge_schedule"} : dispatch Dec 2 04:53:45 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/trash_purge_schedule"} : dispatch Dec 2 04:53:46 localhost podman[289475]: 2025-12-02 09:53:46.763199939 +0000 UTC m=+0.096859680 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.tags=rhceph ceph, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, RELEASE=main, vcs-type=git, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218) Dec 2 04:53:46 localhost podman[289475]: 2025-12-02 09:53:46.869722793 +0000 UTC m=+0.203382524 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.expose-services=, release=1763362218, CEPH_POINT_RELEASE=, name=rhceph, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Dec 2 04:53:47 localhost ceph-mon[288526]: [02/Dec/2025:09:53:46] ENGINE Bus STARTING Dec 2 04:53:47 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:47 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[288526]: [02/Dec/2025:09:53:46] ENGINE Serving on https://172.18.0.105:7150 Dec 2 04:53:48 localhost ceph-mon[288526]: [02/Dec/2025:09:53:46] ENGINE Client ('172.18.0.105', 60410) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 2 04:53:48 localhost ceph-mon[288526]: [02/Dec/2025:09:53:47] ENGINE Serving on http://172.18.0.105:8765 Dec 2 04:53:48 localhost ceph-mon[288526]: [02/Dec/2025:09:53:47] ENGINE Bus STARTED Dec 2 04:53:48 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost nova_compute[281045]: 2025-12-02 09:53:48.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:48 localhost nova_compute[281045]: 2025-12-02 09:53:48.549 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:53:48 localhost nova_compute[281045]: 2025-12-02 09:53:48.550 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:53:48 localhost nova_compute[281045]: 2025-12-02 09:53:48.550 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:53:48 localhost nova_compute[281045]: 2025-12-02 09:53:48.551 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:53:48 localhost nova_compute[281045]: 2025-12-02 09:53:48.551 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:53:49 localhost nova_compute[281045]: 
2025-12-02 09:53:49.004 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:53:49 localhost nova_compute[281045]: 2025-12-02 09:53:49.174 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:53:49 localhost nova_compute[281045]: 2025-12-02 09:53:49.176 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12021MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": 
"0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:53:49 localhost nova_compute[281045]: 2025-12-02 09:53:49.177 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:53:49 localhost nova_compute[281045]: 2025-12-02 09:53:49.177 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:53:49 localhost nova_compute[281045]: 2025-12-02 09:53:49.236 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:53:49 localhost nova_compute[281045]: 2025-12-02 09:53:49.236 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:53:49 localhost nova_compute[281045]: 2025-12-02 09:53:49.257 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:53:49 localhost ceph-mon[288526]: mon.np0005541914@3(peon).osd e86 _set_new_cache_sizes cache_size:1019818837 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541910", "name": 
"osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541909", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541909", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:49 localhost nova_compute[281045]: 2025-12-02 09:53:49.684 281049 DEBUG oslo_concurrency.processutils 
[None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:53:49 localhost nova_compute[281045]: 2025-12-02 09:53:49.691 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:53:49 localhost nova_compute[281045]: 2025-12-02 09:53:49.706 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:53:49 localhost nova_compute[281045]: 2025-12-02 09:53:49.709 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:53:49 localhost nova_compute[281045]: 2025-12-02 09:53:49.710 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.532s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:53:50 localhost ceph-mon[288526]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M Dec 2 04:53:50 localhost ceph-mon[288526]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M Dec 2 04:53:50 localhost ceph-mon[288526]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M Dec 2 04:53:50 localhost ceph-mon[288526]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:53:50 localhost ceph-mon[288526]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:53:50 localhost ceph-mon[288526]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:53:50 localhost ceph-mon[288526]: Updating np0005541909.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:50 localhost ceph-mon[288526]: Updating np0005541910.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:50 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:50 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:50 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:50 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:50 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:50 localhost nova_compute[281045]: 2025-12-02 09:53:50.707 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:50 localhost nova_compute[281045]: 2025-12-02 09:53:50.707 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:50 localhost nova_compute[281045]: 2025-12-02 09:53:50.724 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:50 localhost nova_compute[281045]: 2025-12-02 09:53:50.725 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:53:50 localhost nova_compute[281045]: 2025-12-02 09:53:50.725 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:53:50 localhost nova_compute[281045]: 2025-12-02 09:53:50.737 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:53:50 localhost nova_compute[281045]: 2025-12-02 09:53:50.738 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:50 localhost nova_compute[281045]: 2025-12-02 09:53:50.739 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:50 localhost nova_compute[281045]: 2025-12-02 09:53:50.739 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:50 localhost nova_compute[281045]: 2025-12-02 09:53:50.739 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:53:51 localhost ceph-mon[288526]: Updating np0005541909.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:51 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:51 localhost ceph-mon[288526]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:51 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:51 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:51 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:51 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:51 localhost ceph-mon[288526]: Updating np0005541910.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:51 localhost ceph-mon[288526]: Updating np0005541909.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:51 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:51 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:51 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:51 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:51 localhost nova_compute[281045]: 2025-12-02 09:53:51.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:51 localhost nova_compute[281045]: 2025-12-02 09:53:51.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:51 localhost nova_compute[281045]: 2025-12-02 09:53:51.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:53:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:53:52 localhost systemd[1]: tmp-crun.zqo9r8.mount: Deactivated successfully. 
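The repeated "Unable to set osd_memory_target" errors above are internally consistent: the attempted value 877246668 bytes is exactly the "836.6M" (MiB) the autotuner logged, and the rejection threshold 939524096 bytes is 896 MiB, so the mgr's computed target falls below the option's minimum and the config set is refused. A minimal standalone check of that arithmetic (illustrative Python, not part of the log or of any Ceph tooling):

```python
# Cross-check the osd_memory_target figures reported in the log entries above.
MIB = 2 ** 20

attempted = 877_246_668  # bytes the mgr autotuner tried to set
minimum = 939_524_096    # minimum reported in the error message

# The human-readable "836.6M" in the log is the attempted value in MiB.
assert round(attempted / MIB, 1) == 836.6

# The rejection threshold is exactly 896 MiB, so the attempt fails.
assert minimum == 896 * MIB
assert attempted < minimum

print(f"attempted {attempted / MIB:.1f} MiB < minimum {minimum / MIB:.0f} MiB")
```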
Dec 2 04:53:52 localhost podman[290423]: 2025-12-02 09:53:52.471925207 +0000 UTC m=+0.144056613 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 2 04:53:52 localhost podman[290423]: 2025-12-02 09:53:52.479769437 +0000 UTC m=+0.151900833 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 2 04:53:52 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 2 04:53:52 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:53:52 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:53:52 localhost ceph-mon[288526]: Updating np0005541909.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:53:52 localhost ceph-mon[288526]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:53:52 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:53:52 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:53:52 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:52 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:52 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:52 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:52 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:52 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:52 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:52 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:52 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:52 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:52 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:52 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:53:52 localhost podman[290422]: 2025-12-02 09:53:52.435214155 +0000 UTC m=+0.110551668 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 2 04:53:52 localhost podman[290422]: 2025-12-02 09:53:52.56338216 +0000 UTC m=+0.238719703 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 2 04:53:52 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 2 04:53:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:53:53 localhost podman[290460]: 2025-12-02 09:53:53.321221901 +0000 UTC m=+0.078316083 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 2 04:53:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:53:53 localhost podman[290460]: 2025-12-02 09:53:53.352086134 +0000 UTC m=+0.109180316 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec 2 04:53:53 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:53:53 localhost systemd[1]: tmp-crun.G93iCy.mount: Deactivated successfully.
Dec 2 04:53:53 localhost podman[290479]: 2025-12-02 09:53:53.433624904 +0000 UTC m=+0.081847641 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 2 04:53:53 localhost podman[290479]: 2025-12-02 09:53:53.501976732 +0000 UTC m=+0.150199519 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 2 04:53:53 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:53:53 localhost ceph-mon[288526]: Reconfiguring mon.np0005541911 (monmap changed)...
Dec 2 04:53:53 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain
Dec 2 04:53:53 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:53 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:53 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:53:53 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:53:54 localhost ceph-mon[288526]: mon.np0005541914@3(peon).osd e86 _set_new_cache_sizes cache_size:1020050548 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:53:54 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541911.adcgiw (monmap changed)...
Dec 2 04:53:54 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain
Dec 2 04:53:54 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:54 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:54 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:53:54 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:53:55 localhost ceph-mon[288526]: Reconfiguring crash.np0005541911 (monmap changed)...
Dec 2 04:53:55 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 2 04:53:55 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:55 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:55 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:53:55 localhost ceph-mon[288526]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 2 04:53:55 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:53:55 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 2 04:53:55 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:56 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:56 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:56 localhost ceph-mon[288526]: Reconfiguring osd.2 (monmap changed)...
Dec 2 04:53:56 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:53:56 localhost ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:53:58 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:58 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:58 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:53:59 localhost ceph-mon[288526]: mon.np0005541914@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054656 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:53:59 localhost ceph-mon[288526]: Reconfiguring osd.5 (monmap changed)...
Dec 2 04:53:59 localhost ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:53:59 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:59 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:59 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:53:59 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:53:59 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:53:59 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:59 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:59 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:53:59 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:54:00 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:54:00 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:54:00 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain Dec 2 04:54:00 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:00 localhost ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:00 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:54:01 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5f600 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Dec 2 04:54:01 localhost ceph-mon[288526]: mon.np0005541914@3(peon) e7 my rank is now 2 (was 3) Dec 2 04:54:01 localhost ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Dec 2 04:54:01 localhost ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Dec 2 04:54:01 localhost ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0 Dec 2 04:54:01 localhost ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0 Dec 2 04:54:01 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5ef20 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0 Dec 2 04:54:01 localhost sshd[290504]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:54:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:54:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 04:54:02 localhost podman[290507]: 2025-12-02 09:54:02.556052578 +0000 UTC m=+0.088680109 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.buildah.version=1.33.7, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:54:02 localhost podman[290506]: 2025-12-02 09:54:02.629134291 +0000 UTC m=+0.162616908 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', 
'--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:54:02 localhost podman[290507]: 2025-12-02 09:54:02.650020559 +0000 UTC m=+0.182648070 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64) Dec 2 04:54:02 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 04:54:02 localhost podman[290506]: 2025-12-02 09:54:02.665947096 +0000 UTC m=+0.199429753 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:54:02 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:54:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:54:03.165 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:54:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:54:03.165 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:54:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:54:03.165 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:54:03 localhost podman[239757]: time="2025-12-02T09:54:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:54:03 localhost podman[239757]: @ - - [02/Dec/2025:09:54:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 04:54:03 localhost podman[239757]: @ - - [02/Dec/2025:09:54:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19174 "" "Go-http-client/1.1" Dec 2 04:54:03 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election Dec 2 04:54:03 localhost ceph-mon[288526]: paxos.2).electionLogic(26) init, last seen epoch 26 Dec 2 04:54:03 localhost ceph-mon[288526]: mon.np0005541914@2(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:05 localhost sshd[290549]: main: sshd: 
ssh-rsa algorithm is disabled Dec 2 04:54:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:54:07 localhost podman[290550]: 2025-12-02 09:54:07.074676599 +0000 UTC m=+0.081888262 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
container_name=multipathd) Dec 2 04:54:07 localhost podman[290550]: 2025-12-02 09:54:07.088976977 +0000 UTC m=+0.096188650 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:54:07 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:54:08 localhost ceph-mon[288526]: paxos.2).electionLogic(27) init, last seen epoch 27, mid-election, bumping Dec 2 04:54:08 localhost ceph-mon[288526]: mon.np0005541914@2(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:08 localhost ceph-mon[288526]: mon.np0005541914@2(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:08 localhost ceph-mon[288526]: Reconfiguring mon.np0005541912 (monmap changed)... Dec 2 04:54:08 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain Dec 2 04:54:08 localhost ceph-mon[288526]: Reconfiguring crash.np0005541913 (monmap changed)... Dec 2 04:54:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:54:08 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain Dec 2 04:54:08 localhost ceph-mon[288526]: Remove daemons mon.np0005541909 Dec 2 04:54:08 localhost ceph-mon[288526]: Safe to remove mon.np0005541909: new quorum should be ['np0005541911', 'np0005541910', 'np0005541914', 'np0005541913', 'np0005541912'] (from ['np0005541911', 'np0005541910', 'np0005541914', 'np0005541913', 'np0005541912']) Dec 2 04:54:08 localhost ceph-mon[288526]: Removing monitor np0005541909 from monmap... 
Dec 2 04:54:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon rm", "name": "np0005541909"} : dispatch Dec 2 04:54:08 localhost ceph-mon[288526]: Removing daemon mon.np0005541909 from np0005541909.localdomain -- ports [] Dec 2 04:54:08 localhost ceph-mon[288526]: mon.np0005541910 calling monitor election Dec 2 04:54:08 localhost ceph-mon[288526]: mon.np0005541912 calling monitor election Dec 2 04:54:08 localhost ceph-mon[288526]: mon.np0005541911 calling monitor election Dec 2 04:54:08 localhost ceph-mon[288526]: mon.np0005541913 calling monitor election Dec 2 04:54:08 localhost ceph-mon[288526]: mon.np0005541911 is new leader, mons np0005541911,np0005541910,np0005541913,np0005541912 in quorum (ranks 0,1,3,4) Dec 2 04:54:08 localhost ceph-mon[288526]: Health check failed: 1/5 mons down, quorum np0005541911,np0005541910,np0005541913,np0005541912 (MON_DOWN) Dec 2 04:54:08 localhost ceph-mon[288526]: Health detail: HEALTH_WARN 1/5 mons down, quorum np0005541911,np0005541910,np0005541913,np0005541912 Dec 2 04:54:08 localhost ceph-mon[288526]: [WRN] MON_DOWN: 1/5 mons down, quorum np0005541911,np0005541910,np0005541913,np0005541912 Dec 2 04:54:08 localhost ceph-mon[288526]: mon.np0005541914 (rank 2) addr [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] is down (out of quorum) Dec 2 04:54:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:08 localhost ceph-mon[288526]: Reconfiguring osd.0 (monmap changed)... 
Dec 2 04:54:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 2 04:54:08 localhost ceph-mon[288526]: Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:54:08 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054730 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:54:09 localhost ceph-mon[288526]: mon.np0005541914 calling monitor election Dec 2 04:54:09 localhost ceph-mon[288526]: Removed label mon from host np0005541909.localdomain Dec 2 04:54:09 localhost ceph-mon[288526]: Reconfiguring osd.3 (monmap changed)... Dec 2 04:54:09 localhost ceph-mon[288526]: Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:54:09 localhost ceph-mon[288526]: mon.np0005541910 calling monitor election Dec 2 04:54:09 localhost ceph-mon[288526]: mon.np0005541911 calling monitor election Dec 2 04:54:09 localhost ceph-mon[288526]: mon.np0005541911 is new leader, mons np0005541911,np0005541910,np0005541914,np0005541913,np0005541912 in quorum (ranks 0,1,2,3,4) Dec 2 04:54:09 localhost ceph-mon[288526]: Health check cleared: MON_DOWN (was: 1/5 mons down, quorum np0005541911,np0005541910,np0005541913,np0005541912) Dec 2 04:54:09 localhost ceph-mon[288526]: Cluster is now healthy Dec 2 04:54:09 localhost ceph-mon[288526]: overall HEALTH_OK Dec 2 04:54:09 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:09 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:10 localhost ceph-mon[288526]: Removed label mgr from host np0005541909.localdomain Dec 2 04:54:10 localhost ceph-mon[288526]: from='mgr.14184 
172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:10 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... Dec 2 04:54:10 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:54:10 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:54:10 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:10 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:10 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:54:11 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)... 
Dec 2 04:54:11 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:54:11 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:11 localhost ceph-mon[288526]: Removed label _admin from host np0005541909.localdomain Dec 2 04:54:11 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:11 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:11 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:54:12 localhost openstack_network_exporter[241816]: ERROR 09:54:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:54:12 localhost openstack_network_exporter[241816]: ERROR 09:54:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:54:12 localhost openstack_network_exporter[241816]: ERROR 09:54:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:54:12 localhost openstack_network_exporter[241816]: Dec 2 04:54:12 localhost openstack_network_exporter[241816]: ERROR 09:54:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:54:12 localhost openstack_network_exporter[241816]: Dec 2 04:54:12 localhost openstack_network_exporter[241816]: ERROR 09:54:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:54:12 localhost ceph-mon[288526]: Reconfiguring mon.np0005541913 (monmap changed)... 
Dec 2 04:54:12 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain Dec 2 04:54:12 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:12 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:12 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:54:13 localhost podman[290622]: Dec 2 04:54:13 localhost podman[290622]: 2025-12-02 09:54:13.084502113 +0000 UTC m=+0.078997874 container create 4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jepsen, GIT_CLEAN=True, io.buildah.version=1.41.4, architecture=x86_64, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., release=1763362218, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, 
version=7) Dec 2 04:54:13 localhost systemd[1]: Started libpod-conmon-4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093.scope. Dec 2 04:54:13 localhost systemd[1]: Started libcrun container. Dec 2 04:54:13 localhost podman[290622]: 2025-12-02 09:54:13.052614488 +0000 UTC m=+0.047110269 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:13 localhost podman[290622]: 2025-12-02 09:54:13.15613848 +0000 UTC m=+0.150634251 container init 4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jepsen, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, architecture=x86_64, version=7, GIT_CLEAN=True, release=1763362218, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vcs-type=git, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc.) 
Dec 2 04:54:13 localhost jovial_jepsen[290637]: 167 167
Dec 2 04:54:13 localhost podman[290622]: 2025-12-02 09:54:13.166063514 +0000 UTC m=+0.160559275 container start 4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jepsen, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , ceph=True, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, distribution-scope=public, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_BRANCH=main, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z)
Dec 2 04:54:13 localhost podman[290622]: 2025-12-02 09:54:13.167535748 +0000 UTC m=+0.162031559 container attach 4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jepsen, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , ceph=True, version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_BRANCH=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64)
Dec 2 04:54:13 localhost systemd[1]: libpod-4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093.scope: Deactivated successfully.
Dec 2 04:54:13 localhost podman[290622]: 2025-12-02 09:54:13.171776929 +0000 UTC m=+0.166272660 container died 4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jepsen, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, architecture=x86_64, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, release=1763362218, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, ceph=True)
Dec 2 04:54:13 localhost podman[290642]: 2025-12-02 09:54:13.262724296 +0000 UTC m=+0.079386225 container remove 4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jepsen, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main)
Dec 2 04:54:13 localhost systemd[1]: libpod-conmon-4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093.scope: Deactivated successfully.
Dec 2 04:54:13 localhost ceph-mon[288526]: Reconfiguring crash.np0005541914 (monmap changed)...
Dec 2 04:54:13 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 2 04:54:13 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:13 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:13 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 2 04:54:13 localhost podman[290712]:
Dec 2 04:54:13 localhost podman[290712]: 2025-12-02 09:54:13.920669825 +0000 UTC m=+0.068669918 container create f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_carver, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, release=1763362218, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_BRANCH=main, RELEASE=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-type=git)
Dec 2 04:54:13 localhost systemd[1]: Started libpod-conmon-f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936.scope.
Dec 2 04:54:13 localhost systemd[1]: Started libcrun container.
Dec 2 04:54:13 localhost podman[290712]: 2025-12-02 09:54:13.981966997 +0000 UTC m=+0.129967080 container init f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_carver, RELEASE=main, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 04:54:13 localhost podman[290712]: 2025-12-02 09:54:13.991473708 +0000 UTC m=+0.139473811 container start f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_carver, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux , distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main)
Dec 2 04:54:13 localhost busy_carver[290727]: 167 167
Dec 2 04:54:13 localhost podman[290712]: 2025-12-02 09:54:13.991755037 +0000 UTC m=+0.139755130 container attach f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_carver, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, GIT_BRANCH=main, vcs-type=git, io.openshift.tags=rhceph ceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , ceph=True, RELEASE=main)
Dec 2 04:54:13 localhost systemd[1]: libpod-f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936.scope: Deactivated successfully.
Dec 2 04:54:13 localhost podman[290712]: 2025-12-02 09:54:13.994279804 +0000 UTC m=+0.142279927 container died f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_carver, name=rhceph, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 2 04:54:13 localhost podman[290712]: 2025-12-02 09:54:13.89498375 +0000 UTC m=+0.042983843 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:54:14 localhost systemd[1]: var-lib-containers-storage-overlay-b9ab0ceb23e7c8e86a9a8169cd8fdc71fff65f33366517cca8b93fbb82cb94c0-merged.mount: Deactivated successfully.
Dec 2 04:54:14 localhost podman[290732]: 2025-12-02 09:54:14.088626825 +0000 UTC m=+0.086922606 container remove f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_carver, ceph=True, com.redhat.component=rhceph-container, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.expose-services=, GIT_CLEAN=True, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218)
Dec 2 04:54:14 localhost systemd[1]: libpod-conmon-f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936.scope: Deactivated successfully.
Dec 2 04:54:14 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:54:14 localhost ceph-mon[288526]: Reconfiguring osd.1 (monmap changed)...
Dec 2 04:54:14 localhost ceph-mon[288526]: Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 2 04:54:14 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:14 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:14 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 2 04:54:14 localhost podman[290807]:
Dec 2 04:54:14 localhost podman[290807]: 2025-12-02 09:54:14.925855901 +0000 UTC m=+0.077535100 container create 20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_euler, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, version=7, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, RELEASE=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, release=1763362218)
Dec 2 04:54:14 localhost systemd[1]: Started libpod-conmon-20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e.scope.
Dec 2 04:54:14 localhost systemd[1]: Started libcrun container.
Dec 2 04:54:14 localhost podman[290807]: 2025-12-02 09:54:14.894385339 +0000 UTC m=+0.046064568 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:54:14 localhost podman[290807]: 2025-12-02 09:54:14.995299502 +0000 UTC m=+0.146978701 container init 20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_euler, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=1763362218, GIT_CLEAN=True, RELEASE=main, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, CEPH_POINT_RELEASE=, version=7)
Dec 2 04:54:15 localhost nice_euler[290821]: 167 167
Dec 2 04:54:15 localhost podman[290807]: 2025-12-02 09:54:15.004824863 +0000 UTC m=+0.156504042 container start 20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_euler, GIT_BRANCH=main, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 04:54:15 localhost systemd[1]: libpod-20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e.scope: Deactivated successfully.
Dec 2 04:54:15 localhost podman[290807]: 2025-12-02 09:54:15.008385212 +0000 UTC m=+0.160064421 container attach 20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_euler, version=7, GIT_CLEAN=True, ceph=True, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vcs-type=git, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, CEPH_POINT_RELEASE=)
Dec 2 04:54:15 localhost podman[290807]: 2025-12-02 09:54:15.01159665 +0000 UTC m=+0.163275919 container died 20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_euler, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, release=1763362218, io.buildah.version=1.41.4)
Dec 2 04:54:15 localhost systemd[1]: var-lib-containers-storage-overlay-c7163d93bb4bf6f371b0e4c4358728f4668ecc9afbd8d8214956e22beed7c7e2-merged.mount: Deactivated successfully.
Dec 2 04:54:15 localhost podman[290826]: 2025-12-02 09:54:15.111056348 +0000 UTC m=+0.088620618 container remove 20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_euler, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, ceph=True, build-date=2025-11-26T19:44:28Z, distribution-scope=public, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, version=7, name=rhceph, RELEASE=main, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 2 04:54:15 localhost systemd[1]: libpod-conmon-20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e.scope: Deactivated successfully.
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:15 localhost ceph-mon[288526]: Reconfiguring osd.4 (monmap changed)...
Dec 2 04:54:15 localhost ceph-mon[288526]: Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 2 04:54:15 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:15 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:15 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:54:15 localhost podman[290903]:
Dec 2 04:54:15 localhost podman[290903]: 2025-12-02 09:54:15.944766045 +0000 UTC m=+0.076565030 container create 791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_yonath, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, ceph=True, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_BRANCH=main, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 2 04:54:15 localhost systemd[1]: Started libpod-conmon-791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f.scope.
Dec 2 04:54:15 localhost systemd[1]: Started libcrun container.
Dec 2 04:54:16 localhost podman[290903]: 2025-12-02 09:54:16.01169449 +0000 UTC m=+0.143493465 container init 791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_yonath, release=1763362218, architecture=x86_64, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 2 04:54:16 localhost podman[290903]: 2025-12-02 09:54:15.913670856 +0000 UTC m=+0.045469901 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:54:16 localhost podman[290903]: 2025-12-02 09:54:16.018604541 +0000 UTC m=+0.150403516 container start 791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_yonath, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, release=1763362218, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , version=7)
Dec 2 04:54:16 localhost podman[290903]: 2025-12-02 09:54:16.019051754 +0000 UTC m=+0.150850739 container attach 791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_yonath, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , vcs-type=git, ceph=True, GIT_CLEAN=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph)
Dec 2 04:54:16 localhost cool_yonath[290918]: 167 167
Dec 2 04:54:16 localhost systemd[1]: libpod-791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f.scope: Deactivated successfully.
Dec 2 04:54:16 localhost podman[290903]: 2025-12-02 09:54:16.022848611 +0000 UTC m=+0.154647616 container died 791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_yonath, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_CLEAN=True, GIT_BRANCH=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public, architecture=x86_64)
Dec 2 04:54:16 localhost systemd[1]:
var-lib-containers-storage-overlay-18259d2b484ac379674b8964c1b8161277c44a0b1fd19fa51ffcfda950bba7e7-merged.mount: Deactivated successfully. Dec 2 04:54:16 localhost podman[290923]: 2025-12-02 09:54:16.098640536 +0000 UTC m=+0.071895078 container remove 791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_yonath, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, release=1763362218, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.41.4, ceph=True, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:54:16 localhost systemd[1]: libpod-conmon-791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f.scope: Deactivated successfully. 
Dec 2 04:54:16 localhost podman[290990]: Dec 2 04:54:16 localhost podman[290990]: 2025-12-02 09:54:16.771879591 +0000 UTC m=+0.060672555 container create 236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_faraday, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, release=1763362218) Dec 2 04:54:16 localhost systemd[1]: Started libpod-conmon-236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754.scope. Dec 2 04:54:16 localhost systemd[1]: Started libcrun container. Dec 2 04:54:16 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)... 
Dec 2 04:54:16 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain Dec 2 04:54:16 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:16 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:16 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:54:16 localhost podman[290990]: 2025-12-02 09:54:16.83009427 +0000 UTC m=+0.118887234 container init 236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_faraday, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, version=7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_CLEAN=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, ceph=True, RELEASE=main, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:54:16 localhost 
podman[290990]: 2025-12-02 09:54:16.743566016 +0000 UTC m=+0.032359030 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:16 localhost podman[290990]: 2025-12-02 09:54:16.84384435 +0000 UTC m=+0.132637324 container start 236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_faraday, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, ceph=True, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux ) Dec 2 04:54:16 localhost podman[290990]: 2025-12-02 09:54:16.844088157 +0000 UTC m=+0.132881131 container attach 236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_faraday, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1763362218, architecture=x86_64, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, name=rhceph, ceph=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_BRANCH=main) Dec 2 04:54:16 localhost jovial_faraday[291005]: 167 167 Dec 2 04:54:16 localhost systemd[1]: libpod-236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754.scope: Deactivated successfully. 
Dec 2 04:54:16 localhost podman[290990]: 2025-12-02 09:54:16.84679552 +0000 UTC m=+0.135588484 container died 236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_faraday, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=) Dec 2 04:54:16 localhost podman[291010]: 2025-12-02 09:54:16.923987447 +0000 UTC m=+0.069050919 container remove 236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_faraday, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, release=1763362218, io.buildah.version=1.41.4, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:54:16 localhost systemd[1]: libpod-conmon-236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754.scope: Deactivated successfully. Dec 2 04:54:17 localhost systemd[1]: tmp-crun.CugJDd.mount: Deactivated successfully. Dec 2 04:54:17 localhost systemd[1]: var-lib-containers-storage-overlay-63d33f8bf0bcc57917b521a9eb24896aaf5cfeff03484b73cf77571d7ef36d56-merged.mount: Deactivated successfully. 
Dec 2 04:54:17 localhost podman[291079]: Dec 2 04:54:17 localhost podman[291079]: 2025-12-02 09:54:17.617656477 +0000 UTC m=+0.071541217 container create e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_northcutt, ceph=True, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, release=1763362218, architecture=x86_64, name=rhceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:54:17 localhost systemd[1]: Started libpod-conmon-e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba.scope. Dec 2 04:54:17 localhost systemd[1]: Started libcrun container. 
Dec 2 04:54:17 localhost podman[291079]: 2025-12-02 09:54:17.679319691 +0000 UTC m=+0.133204431 container init e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_northcutt, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, ceph=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux , name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=1763362218, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:54:17 localhost podman[291079]: 2025-12-02 09:54:17.589396264 +0000 UTC m=+0.043281084 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:17 localhost systemd[1]: tmp-crun.9UgLo4.mount: Deactivated successfully. Dec 2 04:54:17 localhost vigilant_northcutt[291094]: 167 167 Dec 2 04:54:17 localhost systemd[1]: libpod-e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba.scope: Deactivated successfully. 
Dec 2 04:54:17 localhost podman[291079]: 2025-12-02 09:54:17.695261548 +0000 UTC m=+0.149146288 container start e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_northcutt, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, distribution-scope=public, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, version=7) Dec 2 04:54:17 localhost podman[291079]: 2025-12-02 09:54:17.695883997 +0000 UTC m=+0.149768737 container attach e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_northcutt, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, 
url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, CEPH_POINT_RELEASE=, name=rhceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , version=7, com.redhat.component=rhceph-container, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:54:17 localhost podman[291079]: 2025-12-02 09:54:17.698477856 +0000 UTC m=+0.152362646 container died e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_northcutt, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph, vendor=Red Hat, Inc., architecture=x86_64, version=7, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph 
Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True) Dec 2 04:54:17 localhost podman[291099]: 2025-12-02 09:54:17.772964431 +0000 UTC m=+0.067949726 container remove e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_northcutt, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, io.buildah.version=1.41.4, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:54:17 localhost systemd[1]: libpod-conmon-e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba.scope: Deactivated successfully. Dec 2 04:54:17 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)... 
Dec 2 04:54:17 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 04:54:17 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:17 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:17 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:54:18 localhost systemd[1]: var-lib-containers-storage-overlay-57a2763eff75fdd2a6aa47d39a665171b782e47ff990e71b1865281b27311483-merged.mount: Deactivated successfully. Dec 2 04:54:18 localhost ceph-mon[288526]: Reconfiguring mon.np0005541914 (monmap changed)... Dec 2 04:54:18 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain Dec 2 04:54:18 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:18 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:19 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:54:20 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:20 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:20 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:54:20 localhost ceph-mon[288526]: Removing np0005541909.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:20 localhost ceph-mon[288526]: Updating np0005541910.localdomain:/etc/ceph/ceph.conf Dec 2 
04:54:20 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:20 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:20 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:20 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:20 localhost ceph-mon[288526]: Removing np0005541909.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:54:20 localhost ceph-mon[288526]: Removing np0005541909.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:54:20 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:20 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[288526]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:21 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:21 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:21 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:21 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:21 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 
localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[288526]: Removing daemon mgr.np0005541909.kfesnk from np0005541909.localdomain -- ports [9283, 8765] Dec 2 04:54:22 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:22 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:54:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. 
Dec 2 04:54:23 localhost podman[291436]: 2025-12-02 09:54:23.094730937 +0000 UTC m=+0.092843877 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:54:23 localhost systemd[1]: tmp-crun.qNhU6v.mount: Deactivated successfully. 
Dec 2 04:54:23 localhost podman[291437]: 2025-12-02 09:54:23.135024617 +0000 UTC m=+0.133781257 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125) Dec 2 04:54:23 localhost podman[291436]: 2025-12-02 09:54:23.16094396 +0000 UTC m=+0.159056930 container exec_died 
8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:54:23 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 04:54:23 localhost podman[291437]: 2025-12-02 09:54:23.175907117 +0000 UTC m=+0.174663757 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm) Dec 2 04:54:23 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 04:54:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:54:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:54:23 localhost podman[291496]: 2025-12-02 09:54:23.536772551 +0000 UTC m=+0.101515772 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true) Dec 2 04:54:23 localhost podman[291496]: 2025-12-02 09:54:23.573093851 +0000 UTC m=+0.137837002 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 04:54:23 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:54:23 localhost podman[291514]: 2025-12-02 09:54:23.616602539 +0000 UTC m=+0.068826983 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 04:54:23 localhost podman[291514]: 2025-12-02 09:54:23.648788623 +0000 UTC m=+0.101013087 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 04:54:23 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:54:23 localhost ceph-mon[288526]: Added label _no_schedule to host np0005541909.localdomain Dec 2 04:54:23 localhost ceph-mon[288526]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541909.localdomain Dec 2 04:54:23 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth rm", "entity": "mgr.np0005541909.kfesnk"} : dispatch Dec 2 04:54:23 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005541909.kfesnk"}]': finished Dec 2 04:54:23 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:23 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:24 localhost systemd[1]: tmp-crun.EzEoFd.mount: Deactivated successfully. Dec 2 04:54:24 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:54:24 localhost ceph-mon[288526]: Removing key for mgr.np0005541909.kfesnk Dec 2 04:54:24 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:24 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:24 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:54:24 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:25 localhost ceph-mon[288526]: Removing daemon crash.np0005541909 from np0005541909.localdomain -- ports [] Dec 2 04:54:25 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:25 
localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain"} : dispatch Dec 2 04:54:25 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain"}]': finished Dec 2 04:54:25 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth rm", "entity": "client.crash.np0005541909.localdomain"} : dispatch Dec 2 04:54:25 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005541909.localdomain"}]': finished Dec 2 04:54:25 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:25 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:25 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:25 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:54:25 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:26 localhost ceph-mon[288526]: Removed host np0005541909.localdomain Dec 2 04:54:26 localhost ceph-mon[288526]: Removing key for client.crash.np0005541909.localdomain Dec 2 04:54:26 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:54:26 localhost ceph-mon[288526]: 
from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:26 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:26 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:54:27 localhost ceph-mon[288526]: Reconfiguring crash.np0005541910 (monmap changed)... Dec 2 04:54:27 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain Dec 2 04:54:27 localhost ceph-mon[288526]: Reconfiguring mon.np0005541910 (monmap changed)... Dec 2 04:54:27 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541910 on np0005541910.localdomain Dec 2 04:54:27 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:27 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:27 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:54:28 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541910.kzipdo (monmap changed)... 
Dec 2 04:54:28 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain Dec 2 04:54:28 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:29 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:54:29 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:29 localhost ceph-mon[288526]: Reconfiguring mon.np0005541911 (monmap changed)... Dec 2 04:54:29 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:54:29 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain Dec 2 04:54:29 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:29 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:29 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541911.adcgiw (monmap changed)... 
Dec 2 04:54:29 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:54:29 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain Dec 2 04:54:31 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:31 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:31 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:31 localhost ceph-mon[288526]: Reconfiguring crash.np0005541911 (monmap changed)... Dec 2 04:54:31 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:54:31 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain Dec 2 04:54:32 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:32 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:32 localhost ceph-mon[288526]: Reconfiguring crash.np0005541912 (monmap changed)... 
Dec 2 04:54:32 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:54:32 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain Dec 2 04:54:32 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:32 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:32 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 2 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 04:54:33 localhost podman[291575]: 2025-12-02 09:54:33.074813392 +0000 UTC m=+0.078009444 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:54:33 localhost podman[291575]: 2025-12-02 09:54:33.085944663 +0000 UTC m=+0.089140665 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:54:33 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:54:33 localhost podman[291576]: 2025-12-02 09:54:33.129506273 +0000 UTC m=+0.130603690 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-type=git) Dec 2 04:54:33 localhost podman[291576]: 2025-12-02 09:54:33.164944486 +0000 UTC m=+0.166041923 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.) Dec 2 04:54:33 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 04:54:33 localhost ceph-mon[288526]: Reconfiguring osd.2 (monmap changed)... Dec 2 04:54:33 localhost ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain Dec 2 04:54:33 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:33 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:33 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:33 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 2 04:54:33 localhost podman[239757]: time="2025-12-02T09:54:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:54:33 localhost podman[239757]: @ - - [02/Dec/2025:09:54:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 04:54:33 localhost podman[239757]: @ - - [02/Dec/2025:09:54:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19174 "" "Go-http-client/1.1" Dec 2 04:54:34 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:54:34 localhost ceph-mon[288526]: Saving service mon spec with placement label:mon Dec 2 04:54:34 localhost ceph-mon[288526]: Reconfiguring osd.5 (monmap changed)... 
Dec 2 04:54:34 localhost ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain Dec 2 04:54:34 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:34 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:34 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:54:35 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)... Dec 2 04:54:35 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain Dec 2 04:54:35 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:35 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:35 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:54:35 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5f1e0 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0 Dec 2 04:54:35 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election Dec 2 04:54:35 localhost ceph-mon[288526]: paxos.2).electionLogic(32) init, last seen epoch 32 Dec 2 04:54:35 localhost ceph-mon[288526]: mon.np0005541914@2(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:35 localhost ceph-mon[288526]: mon.np0005541914@2(electing) e8 collect_metadata vda: no unique 
device id for vda: fallback method has no model nor serial Dec 2 04:54:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:54:38 localhost podman[291618]: 2025-12-02 09:54:38.070228699 +0000 UTC m=+0.078091387 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
maintainer=OpenStack Kubernetes Operator team) Dec 2 04:54:38 localhost podman[291618]: 2025-12-02 09:54:38.107101005 +0000 UTC m=+0.114963683 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 04:54:38 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:54:40 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:40 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election Dec 2 04:54:40 localhost ceph-mon[288526]: paxos.2).electionLogic(35) init, last seen epoch 35, mid-election, bumping Dec 2 04:54:40 localhost ceph-mon[288526]: mon.np0005541914@2(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:40 localhost ceph-mon[288526]: mon.np0005541914@2(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:40 localhost ceph-mon[288526]: mon.np0005541914@2(electing) e8 handle_timecheck drop unexpected msg Dec 2 04:54:40 localhost ceph-mon[288526]: mon.np0005541914@2(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:40 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:41 localhost ceph-mon[288526]: mon.np0005541910 calling monitor election Dec 2 04:54:41 localhost ceph-mon[288526]: mon.np0005541914 calling monitor election Dec 2 04:54:41 localhost ceph-mon[288526]: mon.np0005541913 calling monitor election Dec 2 04:54:41 localhost ceph-mon[288526]: mon.np0005541910 calling monitor election Dec 2 04:54:41 localhost ceph-mon[288526]: mon.np0005541914 calling monitor election Dec 2 04:54:41 localhost ceph-mon[288526]: Health check failed: 1/4 mons down, quorum np0005541911,np0005541910,np0005541914 (MON_DOWN) Dec 2 04:54:41 localhost ceph-mon[288526]: overall HEALTH_OK Dec 2 04:54:41 localhost ceph-mon[288526]: mon.np0005541911 calling monitor election Dec 2 04:54:41 localhost ceph-mon[288526]: mon.np0005541911 is new leader, mons 
np0005541911,np0005541910,np0005541914,np0005541913 in quorum (ranks 0,1,2,3) Dec 2 04:54:41 localhost ceph-mon[288526]: Health check cleared: MON_DOWN (was: 1/4 mons down, quorum np0005541911,np0005541910,np0005541914) Dec 2 04:54:41 localhost ceph-mon[288526]: Cluster is now healthy Dec 2 04:54:41 localhost ceph-mon[288526]: overall HEALTH_OK Dec 2 04:54:41 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:41 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:41 localhost ceph-mon[288526]: Reconfiguring crash.np0005541913 (monmap changed)... Dec 2 04:54:41 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:54:41 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain Dec 2 04:54:41 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:41 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:41 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 2 04:54:42 localhost openstack_network_exporter[241816]: ERROR 09:54:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:54:42 localhost openstack_network_exporter[241816]: ERROR 09:54:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:54:42 localhost openstack_network_exporter[241816]: ERROR 09:54:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for 
ovn-northd Dec 2 04:54:42 localhost openstack_network_exporter[241816]: ERROR 09:54:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:54:42 localhost openstack_network_exporter[241816]: Dec 2 04:54:42 localhost openstack_network_exporter[241816]: ERROR 09:54:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:54:42 localhost openstack_network_exporter[241816]: Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.638083) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282638604, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 3141, "num_deletes": 517, "total_data_size": 9043259, "memory_usage": 9615240, "flush_reason": "Manual Compaction"} Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282679260, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 5540365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10066, "largest_seqno": 13202, "table_properties": {"data_size": 5527476, "index_size": 7666, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4165, 
"raw_key_size": 35681, "raw_average_key_size": 21, "raw_value_size": 5497233, "raw_average_value_size": 3327, "num_data_blocks": 331, "num_entries": 1652, "num_filter_entries": 1652, "num_deletions": 516, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669219, "oldest_key_time": 1764669219, "file_creation_time": 1764669282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 41221 microseconds, and 14932 cpu microseconds. Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.679310) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 5540365 bytes OK Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.679335) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.681167) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.681183) EVENT_LOG_v1 {"time_micros": 1764669282681179, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.681200) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 9027079, prev total WAL file size 9075868, number of live WAL files 2. Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.682591) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. 
'7061786F73003130353433' seq:0, type:0; will stop at (end) Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(5410KB)], [15(8873KB)] Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282682625, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 14627300, "oldest_snapshot_seqno": -1} Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 9820 keys, 12520093 bytes, temperature: kUnknown Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282761768, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 12520093, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12463332, "index_size": 31124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 262018, "raw_average_key_size": 26, "raw_value_size": 12294228, "raw_average_value_size": 1251, "num_data_blocks": 1189, "num_entries": 9820, "num_filter_entries": 9820, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 1764669282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.762019) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12520093 bytes Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.764322) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.6 rd, 158.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.3, 8.7 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(4.9) write-amplify(2.3) OK, records in: 10902, records dropped: 1082 output_compression: NoCompression Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.764343) EVENT_LOG_v1 {"time_micros": 1764669282764333, "job": 6, "event": "compaction_finished", "compaction_time_micros": 79224, "compaction_time_cpu_micros": 25715, "output_level": 6, "num_output_files": 1, "total_output_size": 12520093, "num_input_records": 10902, "num_output_records": 9820, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005541914/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282764960, "job": 6, "event": "table_file_deletion", "file_number": 17} Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282765807, "job": 6, "event": "table_file_deletion", "file_number": 15} Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.682523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.765943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.765951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.765954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.765957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:42 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.765960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:42 localhost ceph-mon[288526]: Reconfiguring osd.0 (monmap changed)... 
Dec 2 04:54:42 localhost ceph-mon[288526]: Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:54:42 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:42 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:42 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 2 04:54:43 localhost ceph-mon[288526]: Reconfiguring osd.3 (monmap changed)... Dec 2 04:54:43 localhost ceph-mon[288526]: Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:54:43 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:43 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:43 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:54:44 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:54:44 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... 
Dec 2 04:54:44 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:54:44 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:44 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:44 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:54:45 localhost podman[291691]: Dec 2 04:54:45 localhost podman[291691]: 2025-12-02 09:54:45.708605458 +0000 UTC m=+0.074176536 container create 00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_sanderson, GIT_BRANCH=main, io.buildah.version=1.41.4, RELEASE=main, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
io.openshift.tags=rhceph ceph) Dec 2 04:54:45 localhost systemd[1]: Started libpod-conmon-00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea.scope. Dec 2 04:54:45 localhost systemd[1]: Started libcrun container. Dec 2 04:54:45 localhost podman[291691]: 2025-12-02 09:54:45.677104266 +0000 UTC m=+0.042675394 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:45 localhost podman[291691]: 2025-12-02 09:54:45.786139757 +0000 UTC m=+0.151710835 container init 00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_sanderson, io.buildah.version=1.41.4, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-type=git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , release=1763362218, build-date=2025-11-26T19:44:28Z, ceph=True, GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:54:45 localhost systemd[1]: tmp-crun.aiMjCk.mount: Deactivated successfully. 
Dec 2 04:54:45 localhost podman[291691]: 2025-12-02 09:54:45.800864326 +0000 UTC m=+0.166435404 container start 00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_sanderson, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, version=7, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_BRANCH=main, io.openshift.expose-services=, release=1763362218, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Dec 2 04:54:45 localhost podman[291691]: 2025-12-02 09:54:45.801489996 +0000 UTC m=+0.167061084 container attach 00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_sanderson, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.buildah.version=1.41.4, architecture=x86_64, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git) Dec 2 04:54:45 localhost hungry_sanderson[291707]: 167 167 Dec 2 04:54:45 localhost systemd[1]: libpod-00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea.scope: Deactivated successfully. 
Dec 2 04:54:45 localhost podman[291691]: 2025-12-02 09:54:45.806270662 +0000 UTC m=+0.171841750 container died 00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_sanderson, GIT_CLEAN=True, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:54:45 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)... 
Dec 2 04:54:45 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:54:45 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:45 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:45 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:54:45 localhost podman[291712]: 2025-12-02 09:54:45.887583136 +0000 UTC m=+0.072443755 container remove 00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_sanderson, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.buildah.version=1.41.4, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, vendor=Red Hat, Inc.) 
Dec 2 04:54:45 localhost systemd[1]: libpod-conmon-00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea.scope: Deactivated successfully. Dec 2 04:54:46 localhost podman[291780]: Dec 2 04:54:46 localhost podman[291780]: 2025-12-02 09:54:46.571371893 +0000 UTC m=+0.061561651 container create 57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_shockley, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , GIT_BRANCH=main, com.redhat.component=rhceph-container, architecture=x86_64, version=7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public) Dec 2 04:54:46 localhost systemd[1]: Started libpod-conmon-57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67.scope. Dec 2 04:54:46 localhost systemd[1]: Started libcrun container. 
Dec 2 04:54:46 localhost podman[291780]: 2025-12-02 09:54:46.633477891 +0000 UTC m=+0.123667569 container init 57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_shockley, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-type=git, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, ceph=True, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, name=rhceph, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph) Dec 2 04:54:46 localhost podman[291780]: 2025-12-02 09:54:46.643209828 +0000 UTC m=+0.133399506 container start 57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_shockley, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, GIT_BRANCH=main, name=rhceph, GIT_CLEAN=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 04:54:46 localhost podman[291780]: 2025-12-02 09:54:46.643478876 +0000 UTC m=+0.133668584 container attach 57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_shockley, GIT_CLEAN=True, distribution-scope=public, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_BRANCH=main, version=7, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:54:46 localhost objective_shockley[291796]: 167 167 Dec 2 04:54:46 localhost systemd[1]: libpod-57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67.scope: Deactivated successfully. Dec 2 04:54:46 localhost podman[291780]: 2025-12-02 09:54:46.644925461 +0000 UTC m=+0.135115199 container died 57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_shockley, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Dec 2 04:54:46 localhost podman[291780]: 2025-12-02 09:54:46.54766805 +0000 UTC m=+0.037857758 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:46 localhost systemd[1]: var-lib-containers-storage-overlay-ac5900ab8ffae1918adf6b8e816c3760e45171d74d4508e28545bac0e0560d3b-merged.mount: Deactivated successfully. 
Dec 2 04:54:46 localhost systemd[1]: tmp-crun.yMFp4C.mount: Deactivated successfully. Dec 2 04:54:46 localhost systemd[1]: var-lib-containers-storage-overlay-c2e651cc5af721df3b5c00d846ce741cc00f245d0bc892a81f8435ea41956d06-merged.mount: Deactivated successfully. Dec 2 04:54:46 localhost podman[291801]: 2025-12-02 09:54:46.748707541 +0000 UTC m=+0.089572277 container remove 57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_shockley, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, distribution-scope=public, ceph=True, io.buildah.version=1.41.4, name=rhceph, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, release=1763362218, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_BRANCH=main) Dec 2 04:54:46 localhost systemd[1]: libpod-conmon-57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67.scope: Deactivated successfully. Dec 2 04:54:46 localhost ceph-mon[288526]: Reconfiguring crash.np0005541914 (monmap changed)... 
Dec 2 04:54:46 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain Dec 2 04:54:46 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:46 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:46 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 2 04:54:47 localhost podman[291876]: Dec 2 04:54:47 localhost podman[291876]: 2025-12-02 09:54:47.535328739 +0000 UTC m=+0.059768126 container create 3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_euler, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, name=rhceph) Dec 2 04:54:47 localhost systemd[1]: Started 
libpod-conmon-3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1.scope. Dec 2 04:54:47 localhost systemd[1]: Started libcrun container. Dec 2 04:54:47 localhost podman[291876]: 2025-12-02 09:54:47.602395068 +0000 UTC m=+0.126834455 container init 3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_euler, com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, release=1763362218, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-type=git, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vendor=Red Hat, Inc., RELEASE=main) Dec 2 04:54:47 localhost podman[291876]: 2025-12-02 09:54:47.505387935 +0000 UTC m=+0.029827352 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:47 localhost podman[291876]: 2025-12-02 09:54:47.611769375 +0000 UTC m=+0.136208762 container start 3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_euler, maintainer=Guillaume Abrioux , RELEASE=main, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=) Dec 2 04:54:47 localhost podman[291876]: 2025-12-02 09:54:47.611956171 +0000 UTC m=+0.136395558 container attach 3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_euler, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , name=rhceph, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, 
version=7, vcs-type=git, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_BRANCH=main) Dec 2 04:54:47 localhost intelligent_euler[291891]: 167 167 Dec 2 04:54:47 localhost systemd[1]: libpod-3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1.scope: Deactivated successfully. Dec 2 04:54:47 localhost podman[291876]: 2025-12-02 09:54:47.614518619 +0000 UTC m=+0.138958006 container died 3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_euler, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_BRANCH=main, com.redhat.component=rhceph-container, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, name=rhceph, io.openshift.expose-services=, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7) Dec 2 04:54:47 localhost podman[291896]: 2025-12-02 09:54:47.704223799 
+0000 UTC m=+0.082148861 container remove 3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_euler, GIT_CLEAN=True, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, ceph=True, RELEASE=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7) Dec 2 04:54:47 localhost systemd[1]: libpod-conmon-3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1.scope: Deactivated successfully. Dec 2 04:54:47 localhost systemd[1]: var-lib-containers-storage-overlay-dd30a025531b0c3b4a7362020c544394b19a7cbea7909b5c730e8590ebc0dc7b-merged.mount: Deactivated successfully. Dec 2 04:54:47 localhost ceph-mon[288526]: Reconfiguring osd.1 (monmap changed)... 
Dec 2 04:54:47 localhost ceph-mon[288526]: Reconfiguring daemon osd.1 on np0005541914.localdomain Dec 2 04:54:47 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:47 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:47 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 2 04:54:48 localhost podman[291973]: Dec 2 04:54:48 localhost podman[291973]: 2025-12-02 09:54:48.497863822 +0000 UTC m=+0.055375742 container create d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_spence, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, GIT_BRANCH=main, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, distribution-scope=public, ceph=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218) Dec 2 04:54:48 localhost systemd[1]: Started 
libpod-conmon-d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039.scope. Dec 2 04:54:48 localhost systemd[1]: Started libcrun container. Dec 2 04:54:48 localhost podman[291973]: 2025-12-02 09:54:48.536481052 +0000 UTC m=+0.093992972 container init d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_spence, vendor=Red Hat, Inc., name=rhceph, ceph=True, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, release=1763362218, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, architecture=x86_64) Dec 2 04:54:48 localhost podman[291973]: 2025-12-02 09:54:48.544757125 +0000 UTC m=+0.102269045 container start d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_spence, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , 
io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, version=7, RELEASE=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:54:48 localhost podman[291973]: 2025-12-02 09:54:48.545534889 +0000 UTC m=+0.103047029 container attach d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_spence, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.expose-services=, version=7, 
io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:54:48 localhost happy_spence[291988]: 167 167 Dec 2 04:54:48 localhost systemd[1]: libpod-d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039.scope: Deactivated successfully. Dec 2 04:54:48 localhost podman[291973]: 2025-12-02 09:54:48.547333134 +0000 UTC m=+0.104845074 container died d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_spence, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=7, maintainer=Guillaume Abrioux , ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z) Dec 2 04:54:48 localhost podman[291973]: 2025-12-02 09:54:48.474118387 +0000 UTC m=+0.031630317 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:48 localhost podman[291993]: 2025-12-02 09:54:48.618530648 +0000 UTC m=+0.062648474 container remove 
d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_spence, RELEASE=main, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container) Dec 2 04:54:48 localhost systemd[1]: libpod-conmon-d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039.scope: Deactivated successfully. Dec 2 04:54:48 localhost systemd[1]: tmp-crun.Jz6Zw7.mount: Deactivated successfully. Dec 2 04:54:48 localhost systemd[1]: var-lib-containers-storage-overlay-2ce7e79137ded3fe60f33574aa99549fece0636d032717af0b44d503b046d447-merged.mount: Deactivated successfully. Dec 2 04:54:48 localhost ceph-mon[288526]: Reconfiguring osd.4 (monmap changed)... 
Dec 2 04:54:48 localhost ceph-mon[288526]: Reconfiguring daemon osd.4 on np0005541914.localdomain Dec 2 04:54:48 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:48 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:48 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:54:48 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:48 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:48 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0. 
Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.316002) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19 Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289316048, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 520, "num_deletes": 256, "total_data_size": 621723, "memory_usage": 632584, "flush_reason": "Manual Compaction"} Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289321170, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 358219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13207, "largest_seqno": 13722, "table_properties": {"data_size": 355267, "index_size": 935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7391, "raw_average_key_size": 19, "raw_value_size": 349101, "raw_average_value_size": 921, "num_data_blocks": 39, "num_entries": 379, "num_filter_entries": 379, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669282, "oldest_key_time": 1764669282, "file_creation_time": 1764669289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}} Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5233 microseconds, and 1960 cpu microseconds. Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.321238) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 358219 bytes OK Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.321265) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.323346) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.323389) EVENT_LOG_v1 {"time_micros": 1764669289323378, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.323416) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 618499, prev total WAL file size 618823, number of live WAL 
files 2. Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.325169) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353136' seq:72057594037927935, type:22 .. '6C6F676D0033373638' seq:0, type:0; will stop at (end) Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(349KB)], [18(11MB)] Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289325227, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 12878312, "oldest_snapshot_seqno": -1} Dec 2 04:54:49 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 9669 keys, 12768818 bytes, temperature: kUnknown Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289403048, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 12768818, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12712041, "index_size": 31524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, 
"index_value_is_delta_encoded": 1, "filter_size": 24197, "raw_key_size": 260025, "raw_average_key_size": 26, "raw_value_size": 12544563, "raw_average_value_size": 1297, "num_data_blocks": 1204, "num_entries": 9669, "num_filter_entries": 9669, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 1764669289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}} Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 04:54:49 localhost podman[292063]: Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.403439) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 12768818 bytes Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.405679) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.3 rd, 163.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.9 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(71.6) write-amplify(35.6) OK, records in: 10199, records dropped: 530 output_compression: NoCompression Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.405719) EVENT_LOG_v1 {"time_micros": 1764669289405703, "job": 8, "event": "compaction_finished", "compaction_time_micros": 77926, "compaction_time_cpu_micros": 41575, "output_level": 6, "num_output_files": 1, "total_output_size": 12768818, "num_input_records": 10199, "num_output_records": 9669, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289406079, "job": 8, "event": "table_file_deletion", "file_number": 20} Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 
{"time_micros": 1764669289408841, "job": 8, "event": "table_file_deletion", "file_number": 18} Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.325069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.408941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.408949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.408952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.408955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:49 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.408957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:49 localhost podman[292063]: 2025-12-02 09:54:49.419523576 +0000 UTC m=+0.074724454 container create e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, name=rhceph, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , RELEASE=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, 
url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:54:49 localhost systemd[1]: Started libpod-conmon-e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17.scope. Dec 2 04:54:49 localhost systemd[1]: Started libcrun container. Dec 2 04:54:49 localhost podman[292063]: 2025-12-02 09:54:49.387804498 +0000 UTC m=+0.043005366 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:49 localhost podman[292063]: 2025-12-02 09:54:49.489156074 +0000 UTC m=+0.144356902 container init e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, name=rhceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, 
GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=) Dec 2 04:54:49 localhost podman[292063]: 2025-12-02 09:54:49.499475439 +0000 UTC m=+0.154676267 container start e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, version=7, com.redhat.component=rhceph-container, release=1763362218, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_CLEAN=True, name=rhceph, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=) Dec 2 04:54:49 localhost podman[292063]: 2025-12-02 09:54:49.499779398 +0000 UTC m=+0.154980226 container attach e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, 
GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., release=1763362218, CEPH_POINT_RELEASE=) Dec 2 04:54:49 localhost zealous_carver[292078]: 167 167 Dec 2 04:54:49 localhost systemd[1]: libpod-e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17.scope: Deactivated successfully. 
Dec 2 04:54:49 localhost podman[292063]: 2025-12-02 09:54:49.503032498 +0000 UTC m=+0.158233356 container died e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, version=7, GIT_BRANCH=main, name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Dec 2 04:54:49 localhost podman[292083]: 2025-12-02 09:54:49.603261749 +0000 UTC m=+0.087982109 container remove e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, ceph=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.41.4, vcs-type=git, version=7, release=1763362218, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:54:49 localhost systemd[1]: libpod-conmon-e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17.scope: Deactivated successfully. Dec 2 04:54:49 localhost systemd[1]: var-lib-containers-storage-overlay-2d608890f6997853bd8310342bf75301ea75477298cb74acfb311ca4af0b73b7-merged.mount: Deactivated successfully. Dec 2 04:54:49 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)... Dec 2 04:54:49 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain Dec 2 04:54:49 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)... 
Dec 2 04:54:49 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 04:54:49 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:49 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:50 localhost nova_compute[281045]: 2025-12-02 09:54:50.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:50 localhost nova_compute[281045]: 2025-12-02 09:54:50.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:50 localhost nova_compute[281045]: 2025-12-02 09:54:50.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:50 localhost nova_compute[281045]: 2025-12-02 09:54:50.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:54:50 localhost nova_compute[281045]: 2025-12-02 09:54:50.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:50 localhost nova_compute[281045]: 2025-12-02 09:54:50.679 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:54:50 localhost nova_compute[281045]: 2025-12-02 09:54:50.679 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:54:50 localhost nova_compute[281045]: 2025-12-02 09:54:50.679 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:54:50 localhost nova_compute[281045]: 2025-12-02 09:54:50.680 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:54:50 localhost nova_compute[281045]: 2025-12-02 09:54:50.680 
281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:54:51 localhost nova_compute[281045]: 2025-12-02 09:54:51.124 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:54:51 localhost nova_compute[281045]: 2025-12-02 09:54:51.273 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:54:51 localhost nova_compute[281045]: 2025-12-02 09:54:51.274 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12050MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": 
"1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:54:51 localhost nova_compute[281045]: 2025-12-02 09:54:51.274 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:54:51 localhost nova_compute[281045]: 2025-12-02 09:54:51.274 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m 
Dec 2 04:54:51 localhost nova_compute[281045]: 2025-12-02 09:54:51.351 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:54:51 localhost nova_compute[281045]: 2025-12-02 09:54:51.352 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:54:51 localhost nova_compute[281045]: 2025-12-02 09:54:51.369 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:54:51 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e8 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:54:51 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3012433288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:54:51 localhost nova_compute[281045]: 2025-12-02 09:54:51.813 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:54:51 localhost nova_compute[281045]: 2025-12-02 09:54:51.820 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:54:51 localhost nova_compute[281045]: 2025-12-02 09:54:51.840 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:54:51 localhost nova_compute[281045]: 2025-12-02 09:54:51.843 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:54:51 localhost nova_compute[281045]: 2025-12-02 09:54:51.844 281049 DEBUG 
oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:54:51 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:51 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:51 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:54:51 localhost ceph-mon[288526]: Updating np0005541910.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:51 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:51 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:51 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:51 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:52 localhost nova_compute[281045]: 2025-12-02 09:54:52.846 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:52 localhost nova_compute[281045]: 2025-12-02 09:54:52.846 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:54:52 localhost nova_compute[281045]: 2025-12-02 09:54:52.846 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] 
Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:54:52 localhost nova_compute[281045]: 2025-12-02 09:54:52.981 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:54:52 localhost nova_compute[281045]: 2025-12-02 09:54:52.982 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:52 localhost nova_compute[281045]: 2025-12-02 09:54:52.983 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:52 localhost nova_compute[281045]: 2025-12-02 09:54:52.983 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:53 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:53 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:54:53 localhost ceph-mon[288526]: Deploying daemon mon.np0005541912 on np0005541912.localdomain Dec 2 04:54:53 
localhost ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:53 localhost ceph-mon[288526]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:53 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:53 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:53 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:54:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:54:53 localhost systemd[1]: tmp-crun.NrjGrB.mount: Deactivated successfully. Dec 2 04:54:53 localhost podman[292550]: 2025-12-02 09:54:53.383786143 +0000 UTC m=+0.078522060 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, 
tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Dec 2 04:54:53 localhost podman[292549]: 2025-12-02 09:54:53.409997564 +0000 UTC m=+0.104380340 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:54:53 localhost podman[292549]: 2025-12-02 09:54:53.444812007 +0000 UTC m=+0.139194773 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': 
'/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:54:53 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:54:53 localhost podman[292550]: 2025-12-02 09:54:53.469945395 +0000 UTC m=+0.164681262 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute) Dec 2 04:54:53 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 04:54:53 localhost nova_compute[281045]: 2025-12-02 09:54:53.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:54:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. 
Dec 2 04:54:54 localhost podman[292591]: 2025-12-02 09:54:54.082478497 +0000 UTC m=+0.083803271 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 04:54:54 localhost podman[292591]: 2025-12-02 09:54:54.115844556 +0000 UTC 
m=+0.117169350 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 04:54:54 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:54:54 localhost podman[292592]: 2025-12-02 09:54:54.125538672 +0000 UTC m=+0.123232416 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Dec 2 04:54:54 localhost podman[292592]: 2025-12-02 09:54:54.267968273 +0000 UTC m=+0.265662057 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller) Dec 2 04:54:54 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 04:54:54 localhost ceph-mon[288526]: Reconfiguring crash.np0005541910 (monmap changed)... 
Dec 2 04:54:54 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:54:54 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain
Dec 2 04:54:54 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:54:54 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Dec 2 04:54:54 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Dec 2 04:54:55 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:55 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:55 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541910.kzipdo (monmap changed)...
Dec 2 04:54:55 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:54:55 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain
Dec 2 04:54:55 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:55 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:55 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Dec 2 04:54:55 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5ef20 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Dec 2 04:54:55 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 2 04:54:55 localhost ceph-mon[288526]: paxos.2).electionLogic(38) init, last seen epoch 38
Dec 2 04:54:55 localhost ceph-mon[288526]: mon.np0005541914@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:54:55 localhost ceph-mon[288526]: mon.np0005541914@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:54:55 localhost ceph-mon[288526]: mon.np0005541914@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:55:00 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:55:00 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541911.adcgiw (monmap changed)...
Dec 2 04:55:00 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain
Dec 2 04:55:00 localhost ceph-mon[288526]: mon.np0005541910 calling monitor election
Dec 2 04:55:00 localhost ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 2 04:55:00 localhost ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 2 04:55:00 localhost ceph-mon[288526]: mon.np0005541911 calling monitor election
Dec 2 04:55:00 localhost ceph-mon[288526]: mon.np0005541911 is new leader, mons np0005541911,np0005541910,np0005541914,np0005541913 in quorum (ranks 0,1,2,3)
Dec 2 04:55:00 localhost ceph-mon[288526]: Health check failed: 1/5 mons down, quorum np0005541911,np0005541910,np0005541914,np0005541913 (MON_DOWN)
Dec 2 04:55:00 localhost ceph-mon[288526]: Health detail: HEALTH_WARN 1/5 mons down, quorum np0005541911,np0005541910,np0005541914,np0005541913
Dec 2 04:55:00 localhost ceph-mon[288526]: [WRN] MON_DOWN: 1/5 mons down, quorum np0005541911,np0005541910,np0005541914,np0005541913
Dec 2 04:55:00 localhost ceph-mon[288526]: mon.np0005541912 (rank 4) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum)
Dec 2 04:55:00 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:55:00 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:55:01 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:55:01 localhost ceph-mon[288526]: Reconfiguring crash.np0005541911 (monmap changed)...
Dec 2 04:55:01 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:01 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 2 04:55:01 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:55:01 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:55:01 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:02 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 2 04:55:02 localhost ceph-mon[288526]: paxos.2).electionLogic(40) init, last seen epoch 40
Dec 2 04:55:02 localhost ceph-mon[288526]: mon.np0005541914@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:55:02 localhost ceph-mon[288526]: mon.np0005541914@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:55:02 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:55:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:55:03.165 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:55:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:55:03.166 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:55:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:55:03.166 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:55:03 localhost ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 2 04:55:03 localhost ceph-mon[288526]: Reconfiguring osd.2 (monmap changed)...
Dec 2 04:55:03 localhost ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:55:03 localhost ceph-mon[288526]: mon.np0005541910 calling monitor election
Dec 2 04:55:03 localhost ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 2 04:55:03 localhost ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 2 04:55:03 localhost ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 2 04:55:03 localhost ceph-mon[288526]: mon.np0005541911 calling monitor election
Dec 2 04:55:03 localhost ceph-mon[288526]: mon.np0005541911 is new leader, mons np0005541911,np0005541910,np0005541914,np0005541913,np0005541912 in quorum (ranks 0,1,2,3,4)
Dec 2 04:55:03 localhost ceph-mon[288526]: Health check cleared: MON_DOWN (was: 1/5 mons down, quorum np0005541911,np0005541910,np0005541914,np0005541913)
Dec 2 04:55:03 localhost ceph-mon[288526]: Cluster is now healthy
Dec 2 04:55:03 localhost ceph-mon[288526]: overall HEALTH_OK
Dec 2 04:55:03 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:55:03 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:55:03 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:55:03 localhost podman[239757]: time="2025-12-02T09:55:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:55:03 localhost podman[239757]: @ - - [02/Dec/2025:09:55:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 2 04:55:03 localhost podman[239757]: @ - - [02/Dec/2025:09:55:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19170 "" "Go-http-client/1.1"
Dec 2 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:55:04 localhost podman[292635]: 2025-12-02 09:55:04.116179098 +0000 UTC m=+0.108431543 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible) Dec 2 04:55:04 localhost podman[292634]: 2025-12-02 09:55:04.078022713 +0000 UTC m=+0.077373315 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:55:04 localhost podman[292634]: 2025-12-02 09:55:04.159892354 +0000 UTC m=+0.159242956 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:55:04 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:55:04 localhost podman[292635]: 2025-12-02 09:55:04.181092961 +0000 UTC m=+0.173345456 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, 
url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Dec 2 04:55:04 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 04:55:04 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:55:04 localhost ceph-mon[288526]: Reconfiguring osd.5 (monmap changed)...
Dec 2 04:55:04 localhost ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:55:04 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:55:04 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:55:04 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:05 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:55:05 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:55:05 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:55:05 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:55:05 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:05 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:55:05 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:55:05 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:05 localhost sshd[292675]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:55:06 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:55:06 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 2 04:55:06 localhost ceph-mon[288526]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 2 04:55:06 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain Dec 2 04:55:06 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:06 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:06 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 2 04:55:07 localhost ceph-mon[288526]: Reconfiguring osd.0 (monmap changed)... Dec 2 04:55:07 localhost ceph-mon[288526]: Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:55:07 localhost ceph-mon[288526]: Reconfig service osd.default_drive_group Dec 2 04:55:07 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:07 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:07 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 04:55:08 localhost podman[292677]: 2025-12-02 09:55:08.536281769 +0000 UTC m=+0.053546866 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 04:55:08 localhost podman[292677]: 2025-12-02 09:55:08.54872117 +0000 UTC m=+0.065986227 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd) Dec 2 04:55:08 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:55:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[288526]: Reconfiguring osd.3 (monmap changed)... 
Dec 2 04:55:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 2 04:55:08 localhost ceph-mon[288526]: Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:55:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e87 e87: 6 total, 6 up, 6 in Dec 2 04:55:09 localhost ceph-mgr[287188]: mgr handle_mgr_map Activating! 
Dec 2 04:55:09 localhost ceph-mgr[287188]: mgr handle_mgr_map I am now activating Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541910"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541910"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541911"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541912"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command 
mon_command({"prefix": "mds metadata", "who": "mds.np0005541914.sqgqkj"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541914.sqgqkj"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon).mds e16 all = 0 Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005541913.maexpe"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541913.maexpe"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon).mds e16 all = 0 Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005541912.ghcwcm"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541912.ghcwcm"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon).mds e16 all = 0 Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541910.kzipdo", "id": "np0005541910.kzipdo"} v 0) Dec 2 04:55:09 localhost 
ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541910.kzipdo", "id": "np0005541910.kzipdo"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541909.kfesnk", "id": "np0005541909.kfesnk"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541909.kfesnk", "id": "np0005541909.kfesnk"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 0} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd 
metadata", "id": 1} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 1} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 2} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 3} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 4} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 5} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mds metadata"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon).mds e16 all = 1 Dec 2 
04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd metadata"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mon metadata"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata"} : dispatch Dec 2 04:55:09 localhost ceph-mgr[287188]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 04:55:09 localhost ceph-mgr[287188]: mgr load Constructed class from module: balancer Dec 2 04:55:09 localhost ceph-mgr[287188]: [balancer INFO root] Starting Dec 2 04:55:09 localhost ceph-mgr[287188]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 04:55:09 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_09:55:09 Dec 2 04:55:09 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 04:55:09 localhost ceph-mgr[287188]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later Dec 2 04:55:09 localhost systemd[1]: session-64.scope: Deactivated successfully. Dec 2 04:55:09 localhost systemd[1]: session-64.scope: Consumed 17.128s CPU time. Dec 2 04:55:09 localhost systemd-logind[760]: Session 64 logged out. Waiting for processes to exit. Dec 2 04:55:09 localhost systemd-logind[760]: Removed session 64. 
Dec 2 04:55:09 localhost ceph-mgr[287188]: [cephadm WARNING root] removing stray HostCache host record np0005541909.localdomain.devices.0 Dec 2 04:55:09 localhost ceph-mgr[287188]: log_channel(cephadm) log [WRN] : removing stray HostCache host record np0005541909.localdomain.devices.0 Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch Dec 2 04:55:09 localhost ceph-mgr[287188]: mgr load Constructed class from module: cephadm Dec 2 04:55:09 localhost ceph-mgr[287188]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 04:55:09 localhost ceph-mgr[287188]: mgr load Constructed class from module: crash Dec 2 04:55:09 localhost ceph-mgr[287188]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 04:55:09 localhost ceph-mgr[287188]: mgr load Constructed class from module: devicehealth Dec 2 04:55:09 localhost ceph-mgr[287188]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 04:55:09 localhost ceph-mgr[287188]: mgr load Constructed class from module: iostat Dec 2 04:55:09 localhost ceph-mgr[287188]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 
04:55:09 localhost ceph-mgr[287188]: mgr load Constructed class from module: nfs Dec 2 04:55:09 localhost ceph-mgr[287188]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 04:55:09 localhost ceph-mgr[287188]: mgr load Constructed class from module: orchestrator Dec 2 04:55:09 localhost ceph-mgr[287188]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 04:55:09 localhost ceph-mgr[287188]: mgr load Constructed class from module: pg_autoscaler Dec 2 04:55:09 localhost ceph-mgr[287188]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 04:55:09 localhost ceph-mgr[287188]: mgr load Constructed class from module: progress Dec 2 04:55:09 localhost ceph-mgr[287188]: [progress INFO root] Loading... Dec 2 04:55:09 localhost ceph-mgr[287188]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Dec 2 04:55:09 localhost ceph-mgr[287188]: [devicehealth INFO root] Starting Dec 2 04:55:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 04:55:09 localhost ceph-mgr[287188]: [progress INFO root] Loaded OSDMap, ready. 
Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] recovery thread starting Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] starting setup Dec 2 04:55:09 localhost ceph-mgr[287188]: mgr load Constructed class from module: rbd_support Dec 2 04:55:09 localhost ceph-mgr[287188]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 04:55:09 localhost ceph-mgr[287188]: mgr load Constructed class from module: restful Dec 2 04:55:09 localhost ceph-mgr[287188]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 04:55:09 localhost ceph-mgr[287188]: mgr load Constructed class from module: status Dec 2 04:55:09 localhost ceph-mgr[287188]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 04:55:09 localhost ceph-mgr[287188]: mgr load Constructed class from module: telemetry Dec 2 04:55:09 localhost ceph-mgr[287188]: [restful INFO root] server_addr: :: server_port: 8003 Dec 2 04:55:09 localhost ceph-mgr[287188]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 04:55:09 localhost ceph-mgr[287188]: [restful WARNING root] server not running: no certificate configured Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, 
start_after= Dec 2 04:55:09 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 04:55:09 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 04:55:09 localhost ceph-mgr[287188]: mgr load Constructed class from module: volumes Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] PerfHandler: starting Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_task_task: vms, start_after= Dec 2 04:55:09 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.782+0000 7f5c3090d640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.782+0000 7f5c3090d640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.782+0000 7f5c3090d640 -1 client.0 error registering admin socket command: (17) 
File exists Dec 2 04:55:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.782+0000 7f5c3090d640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.782+0000 7f5c3090d640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.784+0000 7f5c33112640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.784+0000 7f5c33112640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.784+0000 7f5c33112640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.784+0000 7f5c33112640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.784+0000 7f5c33112640 -1 client.0 error registering 
admin socket command: (17) File exists Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_task_task: volumes, start_after= Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_task_task: images, start_after= Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_task_task: backups, start_after= Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] TaskHandler: starting Dec 2 04:55:09 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} v 0) Dec 2 04:55:09 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Dec 2 04:55:09 localhost ceph-mgr[287188]: [rbd_support INFO root] setup complete Dec 2 04:55:09 localhost sshd[292837]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:55:09 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:09 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:09 localhost ceph-mon[288526]: from='mgr.14184 
172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:09 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:09 localhost ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: from='client.? 172.18.0.200:0/2202206912' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: Activating manager daemon np0005541914.lljzmk Dec 2 04:55:09 localhost ceph-mon[288526]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 2 04:55:09 localhost ceph-mon[288526]: Manager daemon np0005541914.lljzmk is now available Dec 2 04:55:09 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"}]': finished Dec 2 04:55:09 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"}]': finished Dec 2 04:55:09 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch Dec 2 04:55:09 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch Dec 2 04:55:09 localhost systemd-logind[760]: New session 65 of user ceph-admin. Dec 2 04:55:10 localhost systemd[1]: Started Session 65 of User ceph-admin. 
Dec 2 04:55:10 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:55:10 localhost ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:09:55:10] ENGINE Bus STARTING
Dec 2 04:55:10 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:09:55:10] ENGINE Bus STARTING
Dec 2 04:55:10 localhost ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:09:55:10] ENGINE Serving on http://172.18.0.108:8765
Dec 2 04:55:10 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:09:55:10] ENGINE Serving on http://172.18.0.108:8765
Dec 2 04:55:10 localhost ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:09:55:10] ENGINE Serving on https://172.18.0.108:7150
Dec 2 04:55:10 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:09:55:10] ENGINE Serving on https://172.18.0.108:7150
Dec 2 04:55:10 localhost ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:09:55:10] ENGINE Client ('172.18.0.108', 34066) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 2 04:55:10 localhost ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:09:55:10] ENGINE Bus STARTED
Dec 2 04:55:10 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:09:55:10] ENGINE Client ('172.18.0.108', 34066) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 2 04:55:10 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:09:55:10] ENGINE Bus STARTED
Dec 2 04:55:10 localhost ceph-mon[288526]: removing stray HostCache host record np0005541909.localdomain.devices.0
Dec 2 04:55:10 localhost podman[292972]: 2025-12-02 09:55:10.99150984 +0000 UTC m=+0.095196049 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, name=rhceph, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux , vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218)
Dec 2 04:55:11 localhost podman[292972]: 2025-12-02 09:55:11.096003142 +0000 UTC m=+0.199689351 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.41.4, RELEASE=main, vendor=Red Hat, Inc.)
Dec 2 04:55:11 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0)
Dec 2 04:55:11 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0)
Dec 2 04:55:11 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 2 04:55:11 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 2 04:55:11 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:55:11 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:55:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:55:11 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:55:11 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:55:11 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:55:11 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:55:11 localhost ceph-mgr[287188]: [devicehealth INFO root] Check health
Dec 2 04:55:11 localhost ceph-mon[288526]: [02/Dec/2025:09:55:10] ENGINE Bus STARTING
Dec 2 04:55:11 localhost ceph-mon[288526]: [02/Dec/2025:09:55:10] ENGINE Serving on http://172.18.0.108:8765
Dec 2 04:55:11 localhost ceph-mon[288526]: [02/Dec/2025:09:55:10] ENGINE Serving on https://172.18.0.108:7150
Dec 2 04:55:11 localhost ceph-mon[288526]: [02/Dec/2025:09:55:10] ENGINE Client ('172.18.0.108', 34066) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 2 04:55:11 localhost ceph-mon[288526]: [02/Dec/2025:09:55:10] ENGINE Bus STARTED
Dec 2 04:55:11 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:11 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:11 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:11 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:11 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:11 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:11 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:11 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:11 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:11 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:12 localhost openstack_network_exporter[241816]: ERROR 09:55:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:55:12 localhost openstack_network_exporter[241816]: ERROR 09:55:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:55:12 localhost openstack_network_exporter[241816]: ERROR 09:55:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:55:12 localhost openstack_network_exporter[241816]: ERROR 09:55:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:55:12 localhost openstack_network_exporter[241816]:
Dec 2 04:55:12 localhost openstack_network_exporter[241816]: ERROR 09:55:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:55:12 localhost openstack_network_exporter[241816]:
Dec 2 04:55:12 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 2 04:55:12 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 2 04:55:12 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0)
Dec 2 04:55:12 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} v 0)
Dec 2 04:55:12 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:12 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0)
Dec 2 04:55:12 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} v 0)
Dec 2 04:55:12 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:12 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:55:12 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:55:12 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 2 04:55:12 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:12 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Dec 2 04:55:12 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:12 localhost ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 2 04:55:12 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 2 04:55:12 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 2 04:55:12 localhost ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:55:12 localhost ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:55:13 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:13 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:13 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:13 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:13 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:13 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:13 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:13 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:13 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:13 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:13 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:13 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:13 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:13 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:13 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:55:13 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:55:13 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 2 04:55:13 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:13 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Dec 2 04:55:13 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:13 localhost ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 2 04:55:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 2 04:55:13 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 2 04:55:13 localhost ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:55:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:55:13 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:55:13 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:55:13 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 2 04:55:13 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:13 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Dec 2 04:55:13 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:13 localhost ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 2 04:55:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 2 04:55:13 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 2 04:55:13 localhost ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:55:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:55:13 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:13 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:13 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 2 04:55:13 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:55:13 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541910.localdomain:/etc/ceph/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541910.localdomain:/etc/ceph/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:55:13 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:55:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:55:14 localhost ceph-mon[288526]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 2 04:55:14 localhost ceph-mon[288526]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:55:14 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:14 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:14 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:14 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:14 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:14 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:14 localhost ceph-mon[288526]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 2 04:55:14 localhost ceph-mon[288526]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:55:14 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:14 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:14 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:14 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:14 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:14 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 2 04:55:14 localhost ceph-mon[288526]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 2 04:55:14 localhost ceph-mon[288526]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:55:14 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:55:14 localhost ceph-mon[288526]: Updating np0005541910.localdomain:/etc/ceph/ceph.conf
Dec 2 04:55:14 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 2 04:55:14 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 2 04:55:14 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 2 04:55:14 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 2 04:55:14 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:55:14 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:55:14 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:55:14 localhost ceph-mon[288526]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:55:14 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:55:14 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:55:14 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:55:14 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:55:14 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:55:14 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541910.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:55:14 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541910.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:55:14 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:55:14 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:55:14 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:55:14 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:55:14 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:55:14 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mgr.np0005541911.adcgiw 172.18.0.105:0/3351624532; not ready for session (expect reconnect)
Dec 2 04:55:14 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:55:14 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:55:14 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:55:14 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:55:14 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:55:14 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:55:15 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:55:15 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:55:15 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:55:15 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:55:15 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:55:15 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:55:15 localhost ceph-mon[288526]: Updating np0005541910.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:55:15 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:55:15 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:55:15 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:55:15 localhost ceph-mon[288526]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:55:15 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:55:15 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:55:15 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0)
Dec 2 04:55:15 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 2 04:55:15 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541911.adcgiw", "id": "np0005541911.adcgiw"} v 0)
Dec 2 04:55:15 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541911.adcgiw", "id": "np0005541911.adcgiw"} : dispatch
Dec 2 04:55:15 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:55:15 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0)
Dec 2 04:55:15 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 2 04:55:15 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:55:15 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:55:15 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:55:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:55:15 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:55:15 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:55:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 0 B/s wr, 22 op/s
Dec 2 04:55:15 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 2 04:55:15 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev 96530081-b35f-4689-9f69-1623f6fb18d4 (Updating node-proxy deployment (+5 -> 5))
Dec 2 04:55:15 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 96530081-b35f-4689-9f69-1623f6fb18d4 (Updating node-proxy deployment (+5 -> 5))
Dec 2 04:55:15 localhost ceph-mgr[287188]: [progress INFO root] Completed event 96530081-b35f-4689-9f69-1623f6fb18d4 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds
Dec 2 04:55:15 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 2 04:55:15 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 2 04:55:16 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541910 (monmap changed)...
Dec 2 04:55:16 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541910 (monmap changed)...
Dec 2 04:55:16 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 2 04:55:16 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:16 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:16 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:16 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain
Dec 2 04:55:16 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain
Dec 2 04:55:16 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:17 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0)
Dec 2 04:55:17 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0)
Dec 2 04:55:17 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541910.kzipdo (monmap changed)...
Dec 2 04:55:17 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541910.kzipdo (monmap changed)...
Dec 2 04:55:17 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 2 04:55:17 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:17 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 2 04:55:17 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 2 04:55:17 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:17 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:17 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain
Dec 2 04:55:17 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain
Dec 2 04:55:17 localhost ceph-mon[288526]: Reconfiguring crash.np0005541910 (monmap changed)...
Dec 2 04:55:17 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain
Dec 2 04:55:17 localhost ceph-mon[288526]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Dec 2 04:55:17 localhost ceph-mon[288526]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Dec 2 04:55:17 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:17 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:17 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:17 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 0 B/s wr, 16 op/s
Dec 2 04:55:18 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0)
Dec 2 04:55:18 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0)
Dec 2 04:55:18 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 2 04:55:18 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:55:18 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:18 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:18 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:55:18 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:55:19 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:55:19 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541910.kzipdo (monmap changed)...
Dec 2 04:55:19 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain
Dec 2 04:55:19 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:19 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:19 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:55:19 localhost ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:55:19 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 2 04:55:19 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 2 04:55:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Dec 2 04:55:19 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:55:20 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:55:20 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:55:20 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:55:20 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 2 04:55:20 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:55:20 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:20 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:20 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:55:20 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:55:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34293 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 2 04:55:21 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:21 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:21 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:21 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:21 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:21 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:21 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:55:21 localhost ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:55:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 0 B/s wr, 11 op/s
Dec 2 04:55:23 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:55:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 0 B/s wr, 11 op/s
Dec 2 04:55:23 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:55:23 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:55:23 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.26775 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 2 04:55:23 localhost ceph-mgr[287188]: [cephadm INFO root] Saving service mon spec with placement label:mon
Dec 2 04:55:23 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon
Dec 2 04:55:23 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 2 04:55:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:55:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:55:24 localhost podman[293887]: 2025-12-02 09:55:24.073421172 +0000 UTC m=+0.076420302 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 2 04:55:24 localhost podman[293887]: 2025-12-02 09:55:24.081485449 +0000 UTC m=+0.084484559 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Dec 2 04:55:24 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:55:24 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 2 04:55:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:55:24 localhost podman[293886]: 2025-12-02 09:55:24.20117778 +0000 UTC m=+0.203321695 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:55:24 localhost podman[293886]: 2025-12-02 09:55:24.24281147 +0000 UTC m=+0.244955345 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 2 04:55:24 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 2 04:55:24 localhost systemd[1]: tmp-crun.wsgmaz.mount: Deactivated successfully.
Dec 2 04:55:24 localhost podman[293918]: 2025-12-02 09:55:24.349471984 +0000 UTC m=+0.139609860 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 2 04:55:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:55:24 localhost podman[293918]: 2025-12-02 09:55:24.368956659 +0000 UTC m=+0.159094555 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:55:24 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:55:24 localhost podman[293945]: 2025-12-02 09:55:24.437873561 +0000 UTC m=+0.064054495 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec 2 04:55:24 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:55:24 localhost podman[293945]: 2025-12-02 09:55:24.470825956 +0000 UTC m=+0.097006960 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 2 04:55:24 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541912 (monmap changed)...
Dec 2 04:55:24 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541912 (monmap changed)...
Dec 2 04:55:24 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 2 04:55:24 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:55:24 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:55:24 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 2 04:55:24 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 2 04:55:24 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:24 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:24 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 2 04:55:24 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 2 04:55:25 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:25 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:25 localhost ceph-mon[288526]: Saving service mon spec with placement label:mon
Dec 2 04:55:25 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:25 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:25 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:25 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:55:25 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:55:25 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:55:25 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 2 04:55:25 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 2 04:55:25 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 2 04:55:25 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:25 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:25 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:25 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 2 04:55:25 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 2 04:55:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 0 B/s wr, 11 op/s
Dec 2 04:55:25 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34303 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541912", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 2 04:55:26 localhost ceph-mon[288526]: Reconfiguring mon.np0005541912 (monmap changed)...
Dec 2 04:55:26 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 2 04:55:26 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:26 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:26 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 2 04:55:26 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:26 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:26 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 2 04:55:26 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:55:26 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:55:26 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 2 04:55:26 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 2 04:55:26 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 2 04:55:26 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:26 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 2 04:55:26 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 2 04:55:26 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:26 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:26 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 2 04:55:26 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 2 04:55:27 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:55:27 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:55:27 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541913 (monmap changed)...
Dec 2 04:55:27 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541913 (monmap changed)...
Dec 2 04:55:27 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 2 04:55:27 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:55:27 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 2 04:55:27 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 2 04:55:27 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:27 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:27 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain
Dec 2 04:55:27 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain
Dec 2 04:55:27 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:27 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:27 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 2 04:55:27 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:27 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:27 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 2 04:55:27 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:27 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:27 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:55:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:55:28 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:55:28 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:55:28 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541914 (monmap changed)...
Dec 2 04:55:28 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541914 (monmap changed)...
Dec 2 04:55:28 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 2 04:55:28 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:28 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:28 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:28 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 2 04:55:28 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 2 04:55:28 localhost ceph-mon[288526]: Reconfiguring mon.np0005541913 (monmap changed)...
Dec 2 04:55:28 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain
Dec 2 04:55:28 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:28 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:28 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:28 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:28 localhost podman[294023]:
Dec 2 04:55:28 localhost podman[294023]: 2025-12-02 09:55:28.997439428 +0000 UTC m=+0.067652875 container create 29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_poitras, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, ceph=True)
Dec 2 04:55:29 localhost systemd[1]: Started libpod-conmon-29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8.scope.
Dec 2 04:55:29 localhost systemd[1]: Started libcrun container.
Dec 2 04:55:29 localhost podman[294023]: 2025-12-02 09:55:28.96569189 +0000 UTC m=+0.035905367 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:55:29 localhost podman[294023]: 2025-12-02 09:55:29.065149084 +0000 UTC m=+0.135362501 container init 29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_poitras, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, build-date=2025-11-26T19:44:28Z, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.expose-services=, distribution-scope=public, maintainer=Guillaume Abrioux , architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-type=git, release=1763362218, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 2 04:55:29 localhost systemd[1]: tmp-crun.j6BuT7.mount: Deactivated successfully.
Dec 2 04:55:29 localhost podman[294023]: 2025-12-02 09:55:29.076218781 +0000 UTC m=+0.146432208 container start 29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_poitras, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, ceph=True, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 2 04:55:29 localhost podman[294023]: 2025-12-02 09:55:29.076399427 +0000 UTC m=+0.146612844 container attach 29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_poitras, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, com.redhat.component=rhceph-container, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4)
Dec 2 04:55:29 localhost sweet_poitras[294038]: 167 167
Dec 2 04:55:29 localhost systemd[1]: libpod-29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8.scope: Deactivated successfully.
Dec 2 04:55:29 localhost podman[294023]: 2025-12-02 09:55:29.080124181 +0000 UTC m=+0.150337618 container died 29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_poitras, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_BRANCH=main, distribution-scope=public, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, release=1763362218, version=7, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 2 04:55:29 localhost podman[294043]: 2025-12-02 09:55:29.176896033 +0000 UTC m=+0.083711625 container remove 29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_poitras, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhceph ceph, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, ceph=True)
Dec 2 04:55:29 localhost systemd[1]: libpod-conmon-29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8.scope: Deactivated successfully.
Dec 2 04:55:29 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:55:29 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:55:29 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 2 04:55:29 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 2 04:55:29 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 2 04:55:29 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 2 04:55:29 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:29 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:29 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 2 04:55:29 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 2 04:55:29 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:55:29 localhost ceph-mon[288526]: Reconfiguring crash.np0005541914 (monmap changed)...
Dec 2 04:55:29 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 2 04:55:29 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:29 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:29 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 2 04:55:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:55:30 localhost systemd[1]: var-lib-containers-storage-overlay-75827950c3aa79aeb0b5fc736566075ba36b4baafa3f6e488c62ea62fd257b68-merged.mount: Deactivated successfully.
Dec 2 04:55:30 localhost podman[294112]:
Dec 2 04:55:30 localhost podman[294112]: 2025-12-02 09:55:30.212386332 +0000 UTC m=+0.340387736 container create 89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_raman, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, distribution-scope=public, maintainer=Guillaume Abrioux , ceph=True, CEPH_POINT_RELEASE=, release=1763362218, RELEASE=main, io.openshift.expose-services=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 2 04:55:30 localhost systemd[1]: Started libpod-conmon-89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2.scope.
Dec 2 04:55:30 localhost systemd[1]: Started libcrun container.
Dec 2 04:55:30 localhost podman[294112]: 2025-12-02 09:55:30.265379398 +0000 UTC m=+0.393380792 container init 89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_raman, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1763362218, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, version=7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 2 04:55:30 localhost podman[294112]: 2025-12-02 09:55:30.275018162 +0000 UTC m=+0.403019586 container start 89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_raman, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux , architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 2 04:55:30 localhost podman[294112]: 2025-12-02 09:55:30.275263139 +0000 UTC m=+0.403264533 container attach 89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_raman, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, version=7, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_CLEAN=True, maintainer=Guillaume Abrioux , architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.tags=rhceph ceph, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 04:55:30 localhost gifted_raman[294127]: 167 167
Dec 2 04:55:30 localhost systemd[1]: libpod-89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2.scope: Deactivated successfully.
Dec 2 04:55:30 localhost podman[294112]: 2025-12-02 09:55:30.278876709 +0000 UTC m=+0.406878183 container died 89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_raman, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, architecture=x86_64, io.buildah.version=1.41.4, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_CLEAN=True)
Dec 2 04:55:30 localhost podman[294112]: 2025-12-02 09:55:30.193176785 +0000 UTC m=+0.321178189 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:55:30 localhost podman[294132]: 2025-12-02 09:55:30.349571397 +0000 UTC m=+0.062443477 container remove 89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_raman, release=1763362218, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, name=rhceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main)
Dec 2 04:55:30 localhost systemd[1]: libpod-conmon-89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2.scope: Deactivated successfully.
Dec 2 04:55:30 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:55:30 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:55:30 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:55:30 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:55:30 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Dec 2 04:55:30 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Dec 2 04:55:30 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Dec 2 04:55:30 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 2 04:55:30 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:30 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:30 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 2 04:55:30 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 2 04:55:30 localhost ceph-mon[288526]: Reconfiguring osd.1 (monmap changed)...
Dec 2 04:55:30 localhost ceph-mon[288526]: Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 2 04:55:30 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:30 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:30 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:30 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:30 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 2 04:55:31 localhost systemd[1]: var-lib-containers-storage-overlay-4fa6dd665b415bf675226148c9c04b8d656c1849e9bfb689c08e86b332797a55-merged.mount: Deactivated successfully.
Dec 2 04:55:31 localhost podman[294209]:
Dec 2 04:55:31 localhost podman[294209]: 2025-12-02 09:55:31.118346469 +0000 UTC m=+0.073462772 container create 391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pike, io.openshift.expose-services=, version=7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux , release=1763362218, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph)
Dec 2 04:55:31 localhost systemd[1]: Started libpod-conmon-391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a.scope.
Dec 2 04:55:31 localhost systemd[1]: Started libcrun container.
Dec 2 04:55:31 localhost podman[294209]: 2025-12-02 09:55:31.17902026 +0000 UTC m=+0.134136603 container init 391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pike, build-date=2025-11-26T19:44:28Z, name=rhceph, maintainer=Guillaume Abrioux , version=7, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, ceph=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main)
Dec 2 04:55:31 localhost podman[294209]: 2025-12-02 09:55:31.087484017 +0000 UTC m=+0.042600340 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:55:31 localhost friendly_pike[294224]: 167 167
Dec 2 04:55:31 localhost systemd[1]: libpod-391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a.scope: Deactivated successfully.
Dec 2 04:55:31 localhost podman[294209]: 2025-12-02 09:55:31.198801094 +0000 UTC m=+0.153917387 container start 391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pike, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64, distribution-scope=public, vcs-type=git, version=7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7)
Dec 2 04:55:31 localhost podman[294209]: 2025-12-02 09:55:31.199060422 +0000 UTC m=+0.154176755 container attach 391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pike, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, name=rhceph,
build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, version=7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, release=1763362218, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:55:31 localhost podman[294209]: 2025-12-02 09:55:31.205346553 +0000 UTC m=+0.160462906 container died 391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pike, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, io.openshift.tags=rhceph ceph, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Dec 2 04:55:31 localhost podman[294229]: 2025-12-02 09:55:31.267831009 +0000 UTC m=+0.064211319 container remove 391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pike, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, distribution-scope=public) Dec 2 04:55:31 localhost systemd[1]: libpod-conmon-391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a.scope: Deactivated successfully. 
Dec 2 04:55:31 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:55:31 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:55:31 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:55:31 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:55:31 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)... Dec 2 04:55:31 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)... Dec 2 04:55:31 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Dec 2 04:55:31 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:55:31 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:55:31 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:55:31 localhost ceph-mgr[287188]: [cephadm INFO 
cephadm.serve] Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain Dec 2 04:55:31 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain Dec 2 04:55:31 localhost ceph-mon[288526]: Reconfiguring osd.4 (monmap changed)... Dec 2 04:55:31 localhost ceph-mon[288526]: Reconfiguring daemon osd.4 on np0005541914.localdomain Dec 2 04:55:31 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:31 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:31 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:31 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:31 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:55:31 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:55:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Dec 2 04:55:32 localhost systemd[1]: tmp-crun.bGhaFz.mount: Deactivated successfully. Dec 2 04:55:32 localhost systemd[1]: var-lib-containers-storage-overlay-25fdf91b5a4a561e562d191cbbe12b31407dec2f76e140f8af8253e7eb2cd35f-merged.mount: Deactivated successfully. 
Dec 2 04:55:32 localhost podman[294303]: Dec 2 04:55:32 localhost podman[294303]: 2025-12-02 09:55:32.03882039 +0000 UTC m=+0.065737657 container create 2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_faraday, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.buildah.version=1.41.4, version=7, description=Red Hat Ceph Storage 7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, distribution-scope=public) Dec 2 04:55:32 localhost systemd[1]: Started libpod-conmon-2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3.scope. Dec 2 04:55:32 localhost systemd[1]: Started libcrun container. 
Dec 2 04:55:32 localhost podman[294303]: 2025-12-02 09:55:32.098757658 +0000 UTC m=+0.125674915 container init 2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_faraday, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, name=rhceph, CEPH_POINT_RELEASE=) Dec 2 04:55:32 localhost podman[294303]: 2025-12-02 09:55:32.107162284 +0000 UTC m=+0.134079561 container start 2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_faraday, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, RELEASE=main, io.buildah.version=1.41.4, release=1763362218, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7) Dec 2 04:55:32 localhost crazy_faraday[294318]: 167 167 Dec 2 04:55:32 localhost systemd[1]: libpod-2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3.scope: Deactivated successfully. Dec 2 04:55:32 localhost podman[294303]: 2025-12-02 09:55:32.108798185 +0000 UTC m=+0.135715452 container attach 2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_faraday, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_CLEAN=True, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, name=rhceph, RELEASE=main, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, distribution-scope=public) Dec 2 04:55:32 localhost podman[294303]: 2025-12-02 09:55:32.112773416 +0000 UTC m=+0.139690703 container died 2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_faraday, vcs-type=git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, RELEASE=main, io.openshift.tags=rhceph ceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, release=1763362218, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:55:32 localhost podman[294303]: 2025-12-02 09:55:32.014344773 +0000 UTC m=+0.041262030 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:55:32 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) Dec 2 04:55:32 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.200:0/2343995021' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch Dec 2 04:55:32 localhost podman[294323]: 2025-12-02 09:55:32.208811085 +0000 UTC m=+0.087273423 container remove 2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_faraday, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., GIT_CLEAN=True, release=1763362218, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, ceph=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:55:32 localhost systemd[1]: libpod-conmon-2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3.scope: Deactivated successfully. 
Dec 2 04:55:32 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:55:32 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:55:32 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541914.lljzmk (monmap changed)... Dec 2 04:55:32 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541914.lljzmk (monmap changed)... Dec 2 04:55:32 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Dec 2 04:55:32 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:32 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr services"} v 0) Dec 2 04:55:32 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch Dec 2 04:55:32 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:55:32 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:55:32 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 
04:55:32 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 04:55:32 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)... Dec 2 04:55:32 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain Dec 2 04:55:32 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:32 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:32 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:32 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:32 localhost podman[294393]: Dec 2 04:55:32 localhost podman[294393]: 2025-12-02 09:55:32.913880365 +0000 UTC m=+0.069513022 container create 2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_williamson, io.openshift.tags=rhceph ceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, description=Red Hat Ceph Storage 7) Dec 2 04:55:32 localhost systemd[1]: Started libpod-conmon-2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927.scope. Dec 2 04:55:32 localhost systemd[1]: Started libcrun container. Dec 2 04:55:32 localhost podman[294393]: 2025-12-02 09:55:32.964317363 +0000 UTC m=+0.119950060 container init 2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_williamson, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1763362218, io.openshift.expose-services=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, vcs-type=git, 
distribution-scope=public) Dec 2 04:55:32 localhost podman[294393]: 2025-12-02 09:55:32.970294575 +0000 UTC m=+0.125927272 container start 2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_williamson, CEPH_POINT_RELEASE=, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:55:32 localhost podman[294393]: 2025-12-02 09:55:32.970546833 +0000 UTC m=+0.126179580 container attach 2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_williamson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, 
RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=1763362218, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , version=7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, ceph=True, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:55:32 localhost optimistic_williamson[294408]: 167 167 Dec 2 04:55:32 localhost systemd[1]: libpod-2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927.scope: Deactivated successfully. Dec 2 04:55:32 localhost podman[294393]: 2025-12-02 09:55:32.972710689 +0000 UTC m=+0.128343416 container died 2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_williamson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, release=1763362218, io.openshift.tags=rhceph ceph, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, ceph=True) Dec 2 04:55:32 localhost podman[294393]: 2025-12-02 09:55:32.889132979 +0000 UTC m=+0.044765706 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:55:33 localhost systemd[1]: var-lib-containers-storage-overlay-559d91ad8e4feeefa8aa55cac4ca58e7f2368af0e47ea3de773610061157afda-merged.mount: Deactivated successfully. Dec 2 04:55:33 localhost systemd[1]: var-lib-containers-storage-overlay-10a1701e9964ae77602873677580eee5101932eab6b2ceef44f256ffaea7d209-merged.mount: Deactivated successfully. Dec 2 04:55:33 localhost podman[294413]: 2025-12-02 09:55:33.066706247 +0000 UTC m=+0.089290265 container remove 2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_williamson, io.openshift.expose-services=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., name=rhceph, version=7, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, RELEASE=main, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git) Dec 2 04:55:33 localhost systemd[1]: libpod-conmon-2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927.scope: Deactivated successfully. Dec 2 04:55:33 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:55:33 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:55:33 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541914 (monmap changed)... Dec 2 04:55:33 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541914 (monmap changed)... Dec 2 04:55:33 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Dec 2 04:55:33 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:55:33 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Dec 2 04:55:33 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Dec 2 04:55:33 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:55:33 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:55:33 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain Dec 2 04:55:33 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain Dec 2 04:55:33 localhost podman[239757]: time="2025-12-02T09:55:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:55:33 localhost podman[239757]: @ - - [02/Dec/2025:09:55:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 04:55:33 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.26961 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541910", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Dec 2 04:55:33 localhost podman[239757]: @ - - [02/Dec/2025:09:55:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19185 "" "Go-http-client/1.1" Dec 2 04:55:33 localhost podman[294481]: Dec 2 04:55:33 localhost podman[294481]: 2025-12-02 09:55:33.737630674 +0000 UTC m=+0.090554783 container create 2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_hopper, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, distribution-scope=public, RELEASE=main, 
release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, version=7) Dec 2 04:55:33 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)... Dec 2 04:55:33 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 04:55:33 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:33 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:33 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:55:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Dec 2 04:55:33 localhost systemd[1]: Started libpod-conmon-2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff.scope. Dec 2 04:55:33 localhost systemd[1]: Started libcrun container. 
Dec 2 04:55:33 localhost podman[294481]: 2025-12-02 09:55:33.791567489 +0000 UTC m=+0.144491628 container init 2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_hopper, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , ceph=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., RELEASE=main, version=7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, architecture=x86_64) Dec 2 04:55:33 localhost podman[294481]: 2025-12-02 09:55:33.696614083 +0000 UTC m=+0.049538202 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:55:33 localhost podman[294481]: 2025-12-02 09:55:33.801857474 +0000 UTC m=+0.154781583 container start 2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_hopper, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, release=1763362218, RELEASE=main, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, maintainer=Guillaume Abrioux , ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, version=7, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:55:33 localhost podman[294481]: 2025-12-02 09:55:33.802091661 +0000 UTC m=+0.155015810 container attach 2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_hopper, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 04:55:33 localhost recursing_hopper[294495]: 167 167 Dec 2 04:55:33 localhost systemd[1]: libpod-2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff.scope: Deactivated successfully. Dec 2 04:55:33 localhost podman[294481]: 2025-12-02 09:55:33.804875866 +0000 UTC m=+0.157800005 container died 2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_hopper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux , ceph=True, com.redhat.component=rhceph-container, vcs-type=git, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:55:33 localhost podman[294502]: 2025-12-02 09:55:33.885710332 +0000 UTC m=+0.069789900 container remove 2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=recursing_hopper, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, release=1763362218, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, version=7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-type=git) Dec 2 04:55:33 localhost systemd[1]: libpod-conmon-2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff.scope: Deactivated successfully. 
Dec 2 04:55:33 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:55:33 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:55:33 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:55:33 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:55:33 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 04:55:33 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:55:33 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 04:55:34 localhost systemd[1]: var-lib-containers-storage-overlay-f2e82613354c02fe07e8bc59a452e134c0a28d0502c85b5ed9c570070636b1d6-merged.mount: Deactivated successfully. 
Dec 2 04:55:34 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Dec 2 04:55:34 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev 4b82e39f-ec49-4bed-914f-4acb15750e73 (Updating node-proxy deployment (+5 -> 5)) Dec 2 04:55:34 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 4b82e39f-ec49-4bed-914f-4acb15750e73 (Updating node-proxy deployment (+5 -> 5)) Dec 2 04:55:34 localhost ceph-mgr[287188]: [progress INFO root] Completed event 4b82e39f-ec49-4bed-914f-4acb15750e73 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds Dec 2 04:55:34 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 04:55:34 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. 
Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.395995) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334396082, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2356, "num_deletes": 265, "total_data_size": 8432363, "memory_usage": 9082480, "flush_reason": "Manual Compaction"} Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334421782, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 4802871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13727, "largest_seqno": 16078, "table_properties": {"data_size": 4793637, "index_size": 5483, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 23848, "raw_average_key_size": 22, "raw_value_size": 4773312, "raw_average_value_size": 4456, "num_data_blocks": 229, "num_entries": 1071, "num_filter_entries": 1071, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669289, "oldest_key_time": 1764669289, "file_creation_time": 1764669334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 25844 microseconds, and 9888 cpu microseconds. Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.421844) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 4802871 bytes OK Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.421871) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.423688) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.423713) EVENT_LOG_v1 {"time_micros": 1764669334423705, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.423736) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 8420670, prev total WAL file size 8420670, 
number of live WAL files 2. Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.425294) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303238' seq:72057594037927935, type:22 .. '6B760031323930' seq:0, type:0; will stop at (end) Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(4690KB)], [21(12MB)] Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334425371, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 17571689, "oldest_snapshot_seqno": -1} Dec 2 04:55:34 localhost ceph-mon[288526]: mon.np0005541914@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:55:34 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541910 (monmap changed)... Dec 2 04:55:34 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541910 (monmap changed)... 
Dec 2 04:55:34 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Dec 2 04:55:34 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:55:34 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Dec 2 04:55:34 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Dec 2 04:55:34 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:55:34 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:55:34 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541910 on np0005541910.localdomain Dec 2 04:55:34 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541910 on np0005541910.localdomain Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 10250 keys, 16745726 bytes, temperature: kUnknown Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334544634, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 16745726, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16683944, "index_size": 35057, "index_partitions": 0, "top_level_index_size": 0, 
"index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 275363, "raw_average_key_size": 26, "raw_value_size": 16505076, "raw_average_value_size": 1610, "num_data_blocks": 1339, "num_entries": 10250, "num_filter_entries": 10250, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 1764669334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.544945) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 16745726 bytes Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.547361) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.2 rd, 140.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.6, 12.2 +0.0 blob) out(16.0 +0.0 blob), read-write-amplify(7.1) write-amplify(3.5) OK, records in: 10740, records dropped: 490 output_compression: NoCompression Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.547409) EVENT_LOG_v1 {"time_micros": 1764669334547395, "job": 10, "event": "compaction_finished", "compaction_time_micros": 119365, "compaction_time_cpu_micros": 48232, "output_level": 6, "num_output_files": 1, "total_output_size": 16745726, "num_input_records": 10740, "num_output_records": 10250, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334548264, "job": 10, "event": "table_file_deletion", "file_number": 23} Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334550155, "job": 10, 
"event": "table_file_deletion", "file_number": 21} Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.425199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.550215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.550222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.550226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.550229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:55:34 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.550233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:55:34 localhost podman[294535]: 2025-12-02 09:55:34.573597717 +0000 UTC m=+0.104587402 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', 
'--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:55:34 localhost podman[294535]: 2025-12-02 09:55:34.615887227 +0000 UTC m=+0.146876922 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:55:34 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 04:55:34 localhost podman[294536]: 2025-12-02 09:55:34.616495786 +0000 UTC m=+0.143872431 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, vendor=Red Hat, Inc., vcs-type=git, name=ubi9-minimal, config_id=edpm) Dec 2 04:55:34 localhost podman[294536]: 2025-12-02 09:55:34.70088251 +0000 UTC m=+0.228259165 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm) Dec 2 04:55:34 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 04:55:34 localhost ceph-mon[288526]: Reconfiguring mon.np0005541914 (monmap changed)... Dec 2 04:55:34 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain Dec 2 04:55:34 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:34 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:34 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:55:34 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:34 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:34 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:55:35 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34287 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005541910"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:55:35 localhost ceph-mgr[287188]: [cephadm INFO root] Remove daemons mon.np0005541910 Dec 2 04:55:35 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005541910 Dec 2 04:55:35 localhost 
ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "quorum_status"} v 0) Dec 2 04:55:35 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "quorum_status"} : dispatch Dec 2 04:55:35 localhost ceph-mgr[287188]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005541910: new quorum should be ['np0005541911', 'np0005541914', 'np0005541913', 'np0005541912'] (from ['np0005541911', 'np0005541914', 'np0005541913', 'np0005541912']) Dec 2 04:55:35 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005541910: new quorum should be ['np0005541911', 'np0005541914', 'np0005541913', 'np0005541912'] (from ['np0005541911', 'np0005541914', 'np0005541913', 'np0005541912']) Dec 2 04:55:35 localhost ceph-mgr[287188]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005541910 from monmap... Dec 2 04:55:35 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removing monitor np0005541910 from monmap... 
Dec 2 04:55:35 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mon rm", "name": "np0005541910"} v 0) Dec 2 04:55:35 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon rm", "name": "np0005541910"} : dispatch Dec 2 04:55:35 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005541910 from np0005541910.localdomain -- ports [] Dec 2 04:55:35 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005541910 from np0005541910.localdomain -- ports [] Dec 2 04:55:35 localhost ceph-mon[288526]: mon.np0005541914@2(peon) e10 my rank is now 1 (was 2) Dec 2 04:55:35 localhost ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Dec 2 04:55:35 localhost ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Dec 2 04:55:35 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election Dec 2 04:55:35 localhost ceph-mon[288526]: paxos.1).electionLogic(42) init, last seen epoch 42 Dec 2 04:55:35 localhost ceph-mon[288526]: mon.np0005541914@1(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:55:35 localhost ceph-mon[288526]: mon.np0005541914@1(electing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541911"} v 0) Dec 2 04:55:35 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch Dec 2 04:55:35 localhost ceph-mon[288526]: mon.np0005541914@1(electing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541912"} v 0) Dec 2 04:55:35 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' 
cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch Dec 2 04:55:35 localhost ceph-mon[288526]: mon.np0005541914@1(electing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 04:55:35 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 04:55:35 localhost ceph-mon[288526]: mon.np0005541914@1(electing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0) Dec 2 04:55:35 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch Dec 2 04:55:35 localhost ceph-mon[288526]: mon.np0005541914@1(electing) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0) Dec 2 04:55:35 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 04:55:35 localhost ceph-mon[288526]: mon.np0005541914@1(electing) e10 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 04:55:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Dec 2 04:55:37 localhost ceph-mon[288526]: mon.np0005541914@1(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:55:37 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:55:37 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0) Dec 2 04:55:37 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] 
Reconfiguring mgr.np0005541910.kzipdo (monmap changed)... Dec 2 04:55:37 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541910.kzipdo (monmap changed)... Dec 2 04:55:37 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Dec 2 04:55:37 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:37 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541910 on np0005541910.localdomain Dec 2 04:55:37 localhost ceph-mon[288526]: Remove daemons mon.np0005541910 Dec 2 04:55:37 localhost ceph-mon[288526]: Safe to remove mon.np0005541910: new quorum should be ['np0005541911', 'np0005541914', 'np0005541913', 'np0005541912'] (from ['np0005541911', 'np0005541914', 'np0005541913', 'np0005541912']) Dec 2 04:55:37 localhost ceph-mon[288526]: Removing monitor np0005541910 from monmap... 
Dec 2 04:55:37 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon rm", "name": "np0005541910"} : dispatch Dec 2 04:55:37 localhost ceph-mon[288526]: Removing daemon mon.np0005541910 from np0005541910.localdomain -- ports [] Dec 2 04:55:37 localhost ceph-mon[288526]: mon.np0005541912 calling monitor election Dec 2 04:55:37 localhost ceph-mon[288526]: mon.np0005541913 calling monitor election Dec 2 04:55:37 localhost ceph-mon[288526]: mon.np0005541914 calling monitor election Dec 2 04:55:37 localhost ceph-mon[288526]: mon.np0005541911 calling monitor election Dec 2 04:55:37 localhost ceph-mon[288526]: mon.np0005541911 is new leader, mons np0005541911,np0005541914,np0005541913,np0005541912 in quorum (ranks 0,1,2,3) Dec 2 04:55:37 localhost ceph-mon[288526]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 2 04:55:37 localhost ceph-mon[288526]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Dec 2 04:55:37 localhost ceph-mon[288526]: stray daemon mgr.np0005541909.kfesnk on host np0005541909.localdomain not managed by cephadm Dec 2 04:55:37 localhost ceph-mon[288526]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 2 04:55:37 localhost ceph-mon[288526]: stray host np0005541909.localdomain has 1 stray daemons: ['mgr.np0005541909.kfesnk'] Dec 2 04:55:37 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:37 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:37 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:37 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0) Dec 2 04:55:37 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch Dec 2 04:55:37 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:55:37 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:55:37 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain Dec 2 04:55:37 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain Dec 2 04:55:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Dec 2 04:55:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0) Dec 2 04:55:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0) Dec 2 04:55:38 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541911 (monmap changed)... Dec 2 04:55:38 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541911 (monmap changed)... 
Dec 2 04:55:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Dec 2 04:55:38 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:55:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Dec 2 04:55:38 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Dec 2 04:55:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:55:38 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:55:38 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain Dec 2 04:55:38 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain Dec 2 04:55:38 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541910.kzipdo (monmap changed)... 
Dec 2 04:55:38 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:38 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:38 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain Dec 2 04:55:38 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:38 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:38 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:55:38 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34330 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541910.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:55:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Dec 2 04:55:38 localhost ceph-mgr[287188]: [cephadm INFO root] Removed label mon from host np0005541910.localdomain Dec 2 04:55:38 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removed label mon from host np0005541910.localdomain Dec 2 04:55:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0) Dec 2 04:55:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005541911.localdomain}] v 0) Dec 2 04:55:38 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541911.adcgiw (monmap changed)... Dec 2 04:55:38 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541911.adcgiw (monmap changed)... Dec 2 04:55:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Dec 2 04:55:38 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0) Dec 2 04:55:38 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch Dec 2 04:55:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:55:38 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:55:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 04:55:38 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain Dec 2 04:55:38 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain Dec 2 04:55:39 localhost podman[294575]: 2025-12-02 09:55:39.060312491 +0000 UTC m=+0.063455878 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, 
io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.schema-version=1.0) Dec 2 04:55:39 localhost podman[294575]: 2025-12-02 09:55:39.098228668 +0000 UTC m=+0.101372025 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true) Dec 2 04:55:39 localhost systemd[1]: 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 04:55:39 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:55:39 localhost ceph-mon[288526]: Reconfiguring mon.np0005541911 (monmap changed)... Dec 2 04:55:39 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain Dec 2 04:55:39 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:39 localhost ceph-mon[288526]: Removed label mon from host np0005541910.localdomain Dec 2 04:55:39 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:39 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:39 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541911.adcgiw (monmap changed)... Dec 2 04:55:39 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:39 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:39 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain Dec 2 04:55:39 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Dec 2 04:55:39 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 04:55:39 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 04:55:39 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 04:55:39 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 04:55:39 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 04:55:39 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 04:55:39 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0) Dec 2 04:55:39 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0) Dec 2 04:55:39 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541911 (monmap changed)... Dec 2 04:55:39 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541911 (monmap changed)... 
Dec 2 04:55:39 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 2 04:55:39 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:55:39 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:55:39 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:55:39 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain Dec 2 04:55:39 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain Dec 2 04:55:40 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34297 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541910.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:55:40 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Dec 2 04:55:40 localhost ceph-mgr[287188]: [cephadm INFO root] Removed label mgr from host np0005541910.localdomain Dec 2 04:55:40 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removed label mgr from host np0005541910.localdomain Dec 2 04:55:40 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0) Dec 2 04:55:40 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0) Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.854626) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340854669, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 528, "num_deletes": 251, "total_data_size": 502557, "memory_usage": 512808, "flush_reason": "Manual Compaction"} Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340859371, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 315167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16084, "largest_seqno": 16606, "table_properties": {"data_size": 312185, "index_size": 901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8645, "raw_average_key_size": 21, "raw_value_size": 305665, "raw_average_value_size": 764, "num_data_blocks": 37, "num_entries": 400, "num_filter_entries": 400, "num_deletions": 251, "num_merge_operands": 0, 
"num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669334, "oldest_key_time": 1764669334, "file_creation_time": 1764669340, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 4790 microseconds, and 1820 cpu microseconds. Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.859416) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 315167 bytes OK
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.859436) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.861180) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.861204) EVENT_LOG_v1 {"time_micros": 1764669340861198, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.861225) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 499246, prev total WAL file size 499246, number of live WAL files 2.
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.861869) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end)
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(307KB)], [24(15MB)]
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340861917, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 17060893, "oldest_snapshot_seqno": -1}
Dec 2 04:55:40 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541912 (monmap changed)...
Dec 2 04:55:40 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541912 (monmap changed)...
Dec 2 04:55:40 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 2 04:55:40 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:40 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:40 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:40 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 2 04:55:40 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 10123 keys, 14985547 bytes, temperature: kUnknown
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340968613, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 14985547, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14926324, "index_size": 32818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 273610, "raw_average_key_size": 27, "raw_value_size": 14751302, "raw_average_value_size": 1457, "num_data_blocks": 1241, "num_entries": 10123, "num_filter_entries": 10123, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 1764669340, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.968949) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 14985547 bytes
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.971679) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.7 rd, 140.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 16.0 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(101.7) write-amplify(47.5) OK, records in: 10650, records dropped: 527 output_compression: NoCompression
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.971709) EVENT_LOG_v1 {"time_micros": 1764669340971696, "job": 12, "event": "compaction_finished", "compaction_time_micros": 106838, "compaction_time_cpu_micros": 44360, "output_level": 6, "num_output_files": 1, "total_output_size": 14985547, "num_input_records": 10650, "num_output_records": 10123, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340972012, "job": 12, "event": "table_file_deletion", "file_number": 26}
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340974488, "job": 12, "event": "table_file_deletion", "file_number": 24}
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.861779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.974709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.974719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.974903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.974910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:55:40 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.975131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:55:40 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:40 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:40 localhost ceph-mon[288526]: Reconfiguring crash.np0005541911 (monmap changed)...
Dec 2 04:55:40 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:40 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:40 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 2 04:55:40 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:40 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:40 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:40 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:40 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:41 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.26796 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541910.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 2 04:55:41 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 2 04:55:41 localhost ceph-mgr[287188]: [cephadm INFO root] Removed label _admin from host np0005541910.localdomain
Dec 2 04:55:41 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removed label _admin from host np0005541910.localdomain
Dec 2 04:55:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:55:41 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:55:41 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:55:41 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Dec 2 04:55:41 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Dec 2 04:55:41 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 2 04:55:41 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:55:41 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:41 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:41 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:55:41 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:55:42 localhost ceph-mon[288526]: Removed label mgr from host np0005541910.localdomain
Dec 2 04:55:42 localhost ceph-mon[288526]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 2 04:55:42 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 2 04:55:42 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:42 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:42 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:42 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:55:42 localhost openstack_network_exporter[241816]: ERROR 09:55:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:55:42 localhost openstack_network_exporter[241816]: ERROR 09:55:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:55:42 localhost openstack_network_exporter[241816]: ERROR 09:55:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:55:42 localhost openstack_network_exporter[241816]: ERROR 09:55:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:55:42 localhost openstack_network_exporter[241816]:
Dec 2 04:55:42 localhost openstack_network_exporter[241816]: ERROR 09:55:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:55:42 localhost openstack_network_exporter[241816]:
Dec 2 04:55:42 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:55:42 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:55:43 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Dec 2 04:55:43 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Dec 2 04:55:43 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 2 04:55:43 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:55:43 localhost ceph-mon[288526]: Removed label _admin from host np0005541910.localdomain
Dec 2 04:55:43 localhost ceph-mon[288526]: Reconfiguring osd.2 (monmap changed)...
Dec 2 04:55:43 localhost ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:55:43 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:43 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:43 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:43 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:55:43 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:55:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:55:44 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:55:44 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:44 localhost ceph-mon[288526]: Reconfiguring osd.5 (monmap changed)...
Dec 2 04:55:44 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:55:44 localhost ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:55:44 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:55:44 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:55:44 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:55:44 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 2 04:55:44 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:44 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:44 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:44 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:55:44 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:55:44 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:55:44 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:55:44 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:55:45 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:55:45 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:55:45 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 2 04:55:45 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:45 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:45 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:45 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:45 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:45 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:45 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 2 04:55:45 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 2 04:55:45 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:45 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:45 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 2 04:55:45 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 2 04:55:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:55:46 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:55:46 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:55:46 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541912 (monmap changed)...
Dec 2 04:55:46 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541912 (monmap changed)...
Dec 2 04:55:46 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 2 04:55:46 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:55:46 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 2 04:55:46 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 2 04:55:46 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:46 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:46 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 2 04:55:46 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 2 04:55:46 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:55:46 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:55:46 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:46 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:55:46 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:46 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:46 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 2 04:55:46 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:46 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:46 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:55:46 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:55:46 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:55:46 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541913 (monmap changed)...
Dec 2 04:55:46 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541913 (monmap changed)...
Dec 2 04:55:46 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 2 04:55:46 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:46 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:46 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:46 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 2 04:55:46 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 2 04:55:47 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:55:47 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:55:47 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec 2 04:55:47 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec 2 04:55:47 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 2 04:55:47 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 2 04:55:47 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:47 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:47 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 2 04:55:47 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 2 04:55:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:55:47 localhost ceph-mon[288526]: Reconfiguring mon.np0005541912 (monmap changed)...
Dec 2 04:55:47 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 2 04:55:47 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:47 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:47 localhost ceph-mon[288526]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 2 04:55:47 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:47 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:47 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 2 04:55:47 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:47 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:47 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 2 04:55:48 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:55:48 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:55:48 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)...
Dec 2 04:55:48 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)...
Dec 2 04:55:48 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Dec 2 04:55:48 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 2 04:55:48 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:48 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:48 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 2 04:55:48 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 2 04:55:48 localhost ceph-mon[288526]: Reconfiguring osd.0 (monmap changed)...
Dec 2 04:55:48 localhost ceph-mon[288526]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 2 04:55:48 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:48 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:48 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 2 04:55:49 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:55:49 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:55:49 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:55:49 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:55:49 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 2 04:55:49 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 2 04:55:49 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 2 04:55:49 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:49 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:49 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:49 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 2 04:55:49 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 2 04:55:49 localhost ceph-mon[288526]: Reconfiguring osd.3 (monmap changed)...
Dec 2 04:55:49 localhost ceph-mon[288526]: Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 2 04:55:49 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:49 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:49 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:49 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:50 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:55:50 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:55:50 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 2 04:55:50 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 2 04:55:50 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 2 04:55:50 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:50 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 2 04:55:50 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 2 04:55:50 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:50 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:50 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 2 04:55:50 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 2 04:55:50 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 2 04:55:50 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 2 04:55:50 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:50 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:50 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:50 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:51 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:55:51 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:55:51 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541913 (monmap changed)...
Dec 2 04:55:51 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541913 (monmap changed)...
Dec 2 04:55:51 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 2 04:55:51 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:55:51 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 2 04:55:51 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 2 04:55:51 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:51 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:51 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain
Dec 2 04:55:51 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain
Dec 2 04:55:51 localhost nova_compute[281045]: 2025-12-02 09:55:51.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:55:51 localhost nova_compute[281045]: 2025-12-02 09:55:51.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:55:51 localhost nova_compute[281045]: 2025-12-02 09:55:51.635 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:55:51 localhost nova_compute[281045]: 2025-12-02 09:55:51.636 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:55:51 localhost nova_compute[281045]: 2025-12-02 09:55:51.636 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:55:51 localhost nova_compute[281045]: 2025-12-02 09:55:51.666 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:55:51 localhost nova_compute[281045]: 2025-12-02 09:55:51.666 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:55:51 localhost nova_compute[281045]: 2025-12-02 09:55:51.667 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:55:51 localhost nova_compute[281045]: 2025-12-02 09:55:51.667 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 2 04:55:51 localhost nova_compute[281045]: 2025-12-02 09:55:51.667 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 04:55:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:55:51 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 2 04:55:51 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 2 04:55:51 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:51 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:51 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:55:52 localhost nova_compute[281045]: 2025-12-02 09:55:52.101 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 04:55:52 localhost nova_compute[281045]: 2025-12-02 09:55:52.276 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 2 04:55:52 localhost nova_compute[281045]: 2025-12-02 09:55:52.277 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11960MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 2 04:55:52 localhost nova_compute[281045]: 2025-12-02 09:55:52.278 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:55:52 localhost nova_compute[281045]: 2025-12-02 09:55:52.278 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:55:52 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:55:52 localhost nova_compute[281045]: 2025-12-02 09:55:52.363 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 2 04:55:52 localhost nova_compute[281045]: 2025-12-02 09:55:52.363 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 2 04:55:52 localhost nova_compute[281045]: 2025-12-02 09:55:52.383 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 04:55:52 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:55:52 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541914 (monmap changed)...
Dec 2 04:55:52 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541914 (monmap changed)...
Dec 2 04:55:52 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 2 04:55:52 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:52 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:52 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:52 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 2 04:55:52 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 2 04:55:52 localhost nova_compute[281045]: 2025-12-02 09:55:52.796 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 04:55:52 localhost nova_compute[281045]: 2025-12-02 09:55:52.802 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 2 04:55:52 localhost nova_compute[281045]: 2025-12-02 09:55:52.835 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 2 04:55:52 localhost nova_compute[281045]: 2025-12-02 09:55:52.837 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 2 04:55:52 localhost nova_compute[281045]: 2025-12-02 09:55:52.837 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:55:52 localhost ceph-mon[288526]: Reconfiguring mon.np0005541913 (monmap changed)...
Dec 2 04:55:52 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain
Dec 2 04:55:52 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:52 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:52 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:52 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:52 localhost podman[294692]:
Dec 2 04:55:52 localhost podman[294692]: 2025-12-02 09:55:52.99758517 +0000 UTC m=+0.073236735 container create aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_nash, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, release=1763362218, name=rhceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7)
Dec 2 04:55:53 localhost systemd[1]: Started libpod-conmon-aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17.scope.
Dec 2 04:55:53 localhost systemd[1]: Started libcrun container.
Dec 2 04:55:53 localhost podman[294692]: 2025-12-02 09:55:52.968382749 +0000 UTC m=+0.044034324 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:55:53 localhost podman[294692]: 2025-12-02 09:55:53.081575072 +0000 UTC m=+0.157226637 container init aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_nash, description=Red Hat Ceph Storage 7, version=7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, architecture=x86_64, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public)
Dec 2 04:55:53 localhost podman[294692]: 2025-12-02 09:55:53.092287709 +0000 UTC m=+0.167939264 container start aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_nash, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_BRANCH=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, release=1763362218, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 2 04:55:53 localhost podman[294692]: 2025-12-02 09:55:53.092526476 +0000 UTC m=+0.168178071 container attach aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_nash, version=7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., release=1763362218, ceph=True, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 2 04:55:53 localhost keen_nash[294707]: 167 167
Dec 2 04:55:53 localhost systemd[1]: libpod-aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17.scope: Deactivated successfully.
Dec 2 04:55:53 localhost podman[294692]: 2025-12-02 09:55:53.095851027 +0000 UTC m=+0.171502612 container died aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_nash, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_CLEAN=True, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 2 04:55:53 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.26836 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005541910.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 2 04:55:53 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 2 04:55:53 localhost ceph-mgr[287188]: [cephadm INFO root] Added label _no_schedule to host np0005541910.localdomain
Dec 2 04:55:53 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Added label _no_schedule to host np0005541910.localdomain
Dec 2 04:55:53 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 2 04:55:53 localhost ceph-mgr[287188]: [cephadm INFO root] Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541910.localdomain
Dec 2 04:55:53 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541910.localdomain
Dec 2 04:55:53 localhost podman[294712]: 2025-12-02 09:55:53.189026181 +0000 UTC m=+0.082460098 container remove aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_nash, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, release=1763362218, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 2 04:55:53 localhost systemd[1]: libpod-conmon-aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17.scope: Deactivated successfully.
Dec 2 04:55:53 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:55:53 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:55:53 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 2 04:55:53 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 2 04:55:53 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 2 04:55:53 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 2 04:55:53 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:53 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:53 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 2 04:55:53 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 2 04:55:53 localhost nova_compute[281045]: 2025-12-02 09:55:53.730 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:55:53 localhost nova_compute[281045]: 2025-12-02 09:55:53.730 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 2 04:55:53 localhost nova_compute[281045]: 2025-12-02 09:55:53.731 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 2 04:55:53 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:55:53 localhost nova_compute[281045]: 2025-12-02 09:55:53.832 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 2 04:55:53 localhost nova_compute[281045]: 2025-12-02 09:55:53.833 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:55:53 localhost nova_compute[281045]: 2025-12-02 09:55:53.833 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:55:53 localhost nova_compute[281045]: 2025-12-02 09:55:53.834 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 2 04:55:53 localhost ceph-mon[288526]: Reconfiguring crash.np0005541914 (monmap changed)...
Dec 2 04:55:53 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 2 04:55:53 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:53 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:53 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:53 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:53 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 2 04:55:54 localhost systemd[1]: var-lib-containers-storage-overlay-65d7f68c5bbf7ababb6d19dc908c3fde10b2dbb3434069f7c13bf83e3d3d2751-merged.mount: Deactivated successfully.
Dec 2 04:55:54 localhost podman[294780]:
Dec 2 04:55:54 localhost podman[294780]: 2025-12-02 09:55:54.120231748 +0000 UTC m=+0.079331741 container create 0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_shtern, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z)
Dec 2 04:55:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:55:54 localhost systemd[1]: Started libpod-conmon-0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf.scope.
Dec 2 04:55:54 localhost systemd[1]: Started libcrun container.
Dec 2 04:55:54 localhost podman[294780]: 2025-12-02 09:55:54.179538627 +0000 UTC m=+0.138638610 container init 0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_shtern, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, RELEASE=main, build-date=2025-11-26T19:44:28Z, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, version=7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 04:55:54 localhost podman[294780]: 2025-12-02 09:55:54.088346116 +0000 UTC m=+0.047446149 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:55:54 localhost podman[294780]: 2025-12-02 09:55:54.192229845 +0000 UTC m=+0.151329828 container start 0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_shtern, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, release=1763362218, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main)
Dec 2 04:55:54 localhost podman[294780]: 2025-12-02 09:55:54.192578075 +0000 UTC m=+0.151678098 container attach 0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_shtern, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc.)
Dec 2 04:55:54 localhost systemd[1]: libpod-0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf.scope: Deactivated successfully.
Dec 2 04:55:54 localhost naughty_shtern[294797]: 167 167
Dec 2 04:55:54 localhost podman[294780]: 2025-12-02 09:55:54.195275178 +0000 UTC m=+0.154375231 container died 0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_shtern, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, version=7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.expose-services=, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 2 04:55:54 localhost podman[294796]: 2025-12-02 09:55:54.259790326 +0000 UTC m=+0.101584780 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 2 04:55:54 localhost podman[294796]: 2025-12-02 09:55:54.269201442 +0000 UTC m=+0.110995916 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Dec 2 04:55:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:55:54 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 2 04:55:54 localhost podman[294811]: 2025-12-02 09:55:54.351812263 +0000 UTC m=+0.147659606 container remove 0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_shtern, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, version=7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, name=rhceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 2 04:55:54 localhost systemd[1]: libpod-conmon-0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf.scope: Deactivated successfully.
Dec 2 04:55:54 localhost podman[294831]: 2025-12-02 09:55:54.416784875 +0000 UTC m=+0.116120914 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 2 04:55:54 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:55:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:55:54 localhost podman[294831]: 2025-12-02 09:55:54.456847587 +0000 UTC m=+0.156183596 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 2 04:55:54 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 2 04:55:54 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:55:54 localhost nova_compute[281045]: 2025-12-02 09:55:54.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:55:54 localhost podman[294859]: 2025-12-02 09:55:54.530213845 +0000 UTC m=+0.065489199 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 2 04:55:54 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:55:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:55:54 localhost podman[294859]: 2025-12-02 09:55:54.563633594 +0000 UTC m=+0.098908928 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 2 04:55:54 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Dec 2 04:55:54 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Dec 2 04:55:54 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Dec 2 04:55:54 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 2 04:55:54 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:54 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:54 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 2 04:55:54 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 2 04:55:54 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:55:54 localhost podman[294880]: 2025-12-02 09:55:54.610923957 +0000 UTC m=+0.049898823 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 2 04:55:54 localhost podman[294880]: 2025-12-02 09:55:54.638810768 +0000 UTC m=+0.077785644 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller)
Dec 2 04:55:54 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:55:54 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005541910.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 2 04:55:54 localhost ceph-mon[288526]: Added label _no_schedule to host np0005541910.localdomain
Dec 2 04:55:54 localhost ceph-mon[288526]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541910.localdomain
Dec 2 04:55:54 localhost ceph-mon[288526]: Reconfiguring osd.1 (monmap changed)...
Dec 2 04:55:54 localhost ceph-mon[288526]: Reconfiguring daemon osd.1 on np0005541914.localdomain Dec 2 04:55:54 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:54 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:54 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 2 04:55:55 localhost systemd[1]: var-lib-containers-storage-overlay-5cde19fa133f0ceac604e06622698c40e92773af801f892f7cca39423688d386-merged.mount: Deactivated successfully. Dec 2 04:55:55 localhost podman[294958]: Dec 2 04:55:55 localhost podman[294958]: 2025-12-02 09:55:55.076521111 +0000 UTC m=+0.066421227 container create f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_solomon, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, release=1763362218, architecture=x86_64, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:55:55 localhost systemd[1]: Started libpod-conmon-f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4.scope. Dec 2 04:55:55 localhost systemd[1]: Started libcrun container. Dec 2 04:55:55 localhost podman[294958]: 2025-12-02 09:55:55.140573915 +0000 UTC m=+0.130473991 container init f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_solomon, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, ceph=True, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, name=rhceph, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=) Dec 2 04:55:55 localhost podman[294958]: 2025-12-02 09:55:55.050043954 +0000 UTC m=+0.039944030 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:55:55 localhost podman[294958]: 2025-12-02 09:55:55.150848539 +0000 UTC m=+0.140748615 container start f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=sleepy_solomon, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, name=rhceph)
Dec 2 04:55:55 localhost podman[294958]: 2025-12-02 09:55:55.151018804 +0000 UTC m=+0.140918900 container attach f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_solomon, distribution-scope=public, ceph=True, RELEASE=main, architecture=x86_64, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , version=7, release=1763362218, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.,
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 2 04:55:55 localhost sleepy_solomon[294973]: 167 167
Dec 2 04:55:55 localhost systemd[1]: libpod-f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4.scope: Deactivated successfully.
Dec 2 04:55:55 localhost podman[294958]: 2025-12-02 09:55:55.15350491 +0000 UTC m=+0.143405026 container died f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_solomon, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=1763362218, architecture=x86_64, version=7, ceph=True, GIT_CLEAN=True, io.buildah.version=1.41.4, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0,
io.openshift.expose-services=)
Dec 2 04:55:55 localhost podman[294978]: 2025-12-02 09:55:55.235843521 +0000 UTC m=+0.076259187 container remove f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_solomon, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=)
Dec 2 04:55:55 localhost systemd[1]: libpod-conmon-f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4.scope: Deactivated successfully.
Dec 2 04:55:55 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:55:55 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:55:55 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 2 04:55:55 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 2 04:55:55 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 2 04:55:55 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:55 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:55 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:55 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 2 04:55:55 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 2 04:55:55 localhost nova_compute[281045]: 2025-12-02 09:55:55.527 281049 DEBUG oslo_service.periodic_task [None
req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:55:55 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:55:55 localhost ceph-mon[288526]: Reconfiguring osd.4 (monmap changed)...
Dec 2 04:55:55 localhost ceph-mon[288526]: Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 2 04:55:55 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:55 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:55 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:55 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:56 localhost systemd[1]: var-lib-containers-storage-overlay-ca1387c74c649bf9b9b03b1f055508dc33fb7e9a42a0fd726056b8b6b73d8c2d-merged.mount: Deactivated successfully.
Dec 2 04:55:56 localhost podman[295055]:
Dec 2 04:55:56 localhost podman[295055]: 2025-12-02 09:55:56.027804222 +0000 UTC m=+0.078789595 container create c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ramanujan, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.buildah.version=1.41.4, release=1763362218, version=7, RELEASE=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, architecture=x86_64, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-type=git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 2 04:55:56 localhost systemd[1]: Started libpod-conmon-c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac.scope.
Dec 2 04:55:56 localhost podman[295055]: 2025-12-02 09:55:55.99595052 +0000 UTC m=+0.046935933 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:55:56 localhost systemd[1]: Started libcrun container.
Dec 2 04:55:56 localhost podman[295055]: 2025-12-02 09:55:56.126341017 +0000 UTC m=+0.177326400 container init c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ramanujan, io.openshift.expose-services=, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, architecture=x86_64, CEPH_POINT_RELEASE=, release=1763362218, ceph=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., version=7, maintainer=Guillaume Abrioux , name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public)
Dec 2 04:55:56 localhost podman[295055]: 2025-12-02 09:55:56.135945221 +0000 UTC m=+0.186930614 container start c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ramanujan, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_CLEAN=True, name=rhceph, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=,
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=7, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 04:55:56 localhost podman[295055]: 2025-12-02 09:55:56.136295622 +0000 UTC m=+0.187281015 container attach c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ramanujan, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main,
GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, version=7)
Dec 2 04:55:56 localhost nervous_ramanujan[295070]: 167 167
Dec 2 04:55:56 localhost systemd[1]: libpod-c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac.scope: Deactivated successfully.
Dec 2 04:55:56 localhost podman[295055]: 2025-12-02 09:55:56.140244692 +0000 UTC m=+0.191230085 container died c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ramanujan, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-type=git, release=1763362218, build-date=2025-11-26T19:44:28Z, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Guillaume Abrioux , distribution-scope=public, version=7)
Dec 2 04:55:56 localhost podman[295075]: 2025-12-02 09:55:56.233496056 +0000 UTC m=+0.084277212 container remove c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ramanujan, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7,
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, release=1763362218, version=7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container)
Dec 2 04:55:56 localhost systemd[1]: libpod-conmon-c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac.scope: Deactivated successfully.
Dec 2 04:55:56 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:55:56 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:55:56 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 2 04:55:56 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 2 04:55:56 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 2 04:55:56 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:56 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 2 04:55:56 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 2 04:55:56 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:56 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:56 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 2 04:55:56 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 2 04:55:56 localhost podman[295145]:
Dec 2 04:55:56 localhost podman[295145]: 2025-12-02 09:55:56.893858492 +0000 UTC m=+0.063951902 container create d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_satoshi, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, version=7,
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 04:55:56 localhost systemd[1]: Started libpod-conmon-d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c.scope.
Dec 2 04:55:56 localhost systemd[1]: Started libcrun container.
Dec 2 04:55:56 localhost podman[295145]: 2025-12-02 09:55:56.863710302 +0000 UTC m=+0.033803732 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:55:56 localhost podman[295145]: 2025-12-02 09:55:56.968127558 +0000 UTC m=+0.138220958 container init d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_satoshi, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, release=1763362218, version=7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph)
Dec 2 04:55:56 localhost podman[295145]: 2025-12-02 09:55:56.976569856 +0000 UTC m=+0.146663256 container start d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_satoshi, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, distribution-scope=public,
url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218)
Dec 2 04:55:56 localhost podman[295145]: 2025-12-02 09:55:56.976815483 +0000 UTC m=+0.146908933 container attach d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_satoshi, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=,
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main)
Dec 2 04:55:56 localhost brave_satoshi[295160]: 167 167
Dec 2 04:55:56 localhost systemd[1]: libpod-d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c.scope: Deactivated successfully.
Dec 2 04:55:56 localhost podman[295145]: 2025-12-02 09:55:56.979066751 +0000 UTC m=+0.149160231 container died d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_satoshi, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, release=1763362218, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z)
Dec 2 04:55:56 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 2 04:55:56 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 2 04:55:56 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:56 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:56 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:56 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:57 localhost systemd[1]: var-lib-containers-storage-overlay-0ddfc7e13fa5653ad296dc381eafec5c4a56023f55bc89db714b7a311a7d944c-merged.mount: Deactivated successfully.
Dec 2 04:55:57 localhost systemd[1]: var-lib-containers-storage-overlay-2a3430851471d9366ece34bf59a3862aebf49438bc98f54a924f6db145391b4a-merged.mount: Deactivated successfully.
Dec 2 04:55:57 localhost podman[295165]: 2025-12-02 09:55:57.053955066 +0000 UTC m=+0.066378966 container remove d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_satoshi, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, version=7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, name=rhceph, vendor=Red Hat, Inc.)
Dec 2 04:55:57 localhost systemd[1]: libpod-conmon-d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c.scope: Deactivated successfully.
Dec 2 04:55:57 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:55:57 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:55:57 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541914 (monmap changed)...
Dec 2 04:55:57 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541914 (monmap changed)...
Dec 2 04:55:57 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 2 04:55:57 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:55:57 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 2 04:55:57 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 2 04:55:57 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:55:57 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:55:57 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain
Dec 2 04:55:57 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain
Dec 2 04:55:57 localhost podman[295236]:
Dec 2 04:55:57 localhost podman[295236]: 2025-12-02 09:55:57.674498397 +0000 UTC m=+0.064692845 container create 9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_noyce, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, name=rhceph,
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, ceph=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_CLEAN=True) Dec 2 04:55:57 localhost systemd[1]: Started libpod-conmon-9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86.scope. Dec 2 04:55:57 localhost systemd[1]: Started libcrun container. 
Dec 2 04:55:57 localhost podman[295236]: 2025-12-02 09:55:57.728336069 +0000 UTC m=+0.118530507 container init 9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_noyce, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_BRANCH=main, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.41.4, ceph=True, version=7) Dec 2 04:55:57 localhost podman[295236]: 2025-12-02 09:55:57.735161927 +0000 UTC m=+0.125356395 container start 9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_noyce, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, version=7, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., release=1763362218, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhceph ceph) Dec 2 04:55:57 localhost podman[295236]: 2025-12-02 09:55:57.735464956 +0000 UTC m=+0.125659404 container attach 9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_noyce, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, maintainer=Guillaume Abrioux , name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, release=1763362218, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph) Dec 2 04:55:57 localhost heuristic_noyce[295251]: 167 167 Dec 2 04:55:57 localhost systemd[1]: libpod-9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86.scope: Deactivated successfully. Dec 2 04:55:57 localhost podman[295236]: 2025-12-02 09:55:57.73823297 +0000 UTC m=+0.128427468 container died 9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_noyce, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, release=1763362218, RELEASE=main, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_BRANCH=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container) Dec 2 04:55:57 localhost podman[295236]: 2025-12-02 09:55:57.652512916 +0000 UTC m=+0.042707364 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:55:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Dec 2 04:55:57 localhost podman[295256]: 2025-12-02 
09:55:57.801001786 +0000 UTC m=+0.056886786 container remove 9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_noyce, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, name=rhceph, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, RELEASE=main, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, version=7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:55:57 localhost systemd[1]: libpod-conmon-9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86.scope: Deactivated successfully. Dec 2 04:55:57 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:55:57 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:55:57 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)... 
Dec 2 04:55:57 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 04:55:57 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:57 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:57 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:55:57 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:57 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:58 localhost systemd[1]: var-lib-containers-storage-overlay-fe233c1e727aa8b5c930368b8c0c5ae15db668281837d2eb4c2f7497817c7200-merged.mount: Deactivated successfully. Dec 2 04:55:58 localhost ceph-mon[288526]: Reconfiguring mon.np0005541914 (monmap changed)... Dec 2 04:55:58 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain Dec 2 04:55:59 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.44243 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005541910.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:55:59 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Dec 2 04:55:59 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain"} v 0) Dec 2 04:55:59 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain"} : dispatch Dec 2 04:55:59 localhost ceph-mgr[287188]: [cephadm INFO root] Removed host np0005541910.localdomain Dec 2 04:55:59 localhost 
ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removed host np0005541910.localdomain Dec 2 04:55:59 localhost ceph-mgr[287188]: [cephadm ERROR cephadm.utils] executing refresh((['np0005541910.localdomain', 'np0005541911.localdomain', 'np0005541912.localdomain', 'np0005541913.localdomain', 'np0005541914.localdomain'],)) failed.#012Traceback (most recent call last):#012 File "/usr/share/ceph/mgr/cephadm/utils.py", line 94, in do_work#012 return f(*arg)#012 File "/usr/share/ceph/mgr/cephadm/serve.py", line 317, in refresh#012 and not self.mgr.inventory.has_label(host, SpecialHostLabels.NO_MEMORY_AUTOTUNE)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 253, in has_label#012 host = self._get_stored_name(host)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 181, in _get_stored_name#012 self.assert_host(host)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 209, in assert_host#012 raise OrchestratorError('host %s does not exist' % host)#012orchestrator._interface.OrchestratorError: host np0005541910.localdomain does not exist Dec 2 04:55:59 localhost ceph-mgr[287188]: log_channel(cephadm) log [ERR] : executing refresh((['np0005541910.localdomain', 'np0005541911.localdomain', 'np0005541912.localdomain', 'np0005541913.localdomain', 'np0005541914.localdomain'],)) failed.#012Traceback (most recent call last):#012 File "/usr/share/ceph/mgr/cephadm/utils.py", line 94, in do_work#012 return f(*arg)#012 File "/usr/share/ceph/mgr/cephadm/serve.py", line 317, in refresh#012 and not self.mgr.inventory.has_label(host, SpecialHostLabels.NO_MEMORY_AUTOTUNE)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 253, in has_label#012 host = self._get_stored_name(host)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 181, in _get_stored_name#012 self.assert_host(host)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 209, in assert_host#012 raise OrchestratorError('host %s does not exist' % 
host)#012orchestrator._interface.OrchestratorError: host np0005541910.localdomain does not exist Dec 2 04:55:59 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:59.142+0000 7f5c56ad9640 -1 log_channel(cephadm) log [ERR] : executing refresh((['np0005541910.localdomain', 'np0005541911.localdomain', 'np0005541912.localdomain', 'np0005541913.localdomain', 'np0005541914.localdomain'],)) failed. Dec 2 04:55:59 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Traceback (most recent call last): Dec 2 04:55:59 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: File "/usr/share/ceph/mgr/cephadm/utils.py", line 94, in do_work Dec 2 04:55:59 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: return f(*arg) Dec 2 04:55:59 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: File "/usr/share/ceph/mgr/cephadm/serve.py", line 317, in refresh Dec 2 04:55:59 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: and not self.mgr.inventory.has_label(host, SpecialHostLabels.NO_MEMORY_AUTOTUNE) Dec 2 04:55:59 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: File "/usr/share/ceph/mgr/cephadm/inventory.py", line 253, in has_label Dec 2 04:55:59 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: host = self._get_stored_name(host) Dec 2 04:55:59 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: File "/usr/share/ceph/mgr/cephadm/inventory.py", line 181, in _get_stored_name Dec 2 04:55:59 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: self.assert_host(host) Dec 2 04:55:59 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: File "/usr/share/ceph/mgr/cephadm/inventory.py", line 209, in assert_host Dec 2 
04:55:59 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: raise OrchestratorError('host %s does not exist' % host) Dec 2 04:55:59 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: orchestrator._interface.OrchestratorError: host np0005541910.localdomain does not exist Dec 2 04:55:59 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:55:59 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:55:59 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 04:55:59 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:55:59 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:55:59 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:55:59 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:55:59 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:55:59 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:55:59 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:55:59 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:55:59 localhost ceph-mgr[287188]: 
log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:55:59 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:55:59 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:55:59 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:55:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Dec 2 04:55:59 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:55:59 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:55:59 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:55:59 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:55:59 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:55:59 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:56:00 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:00 localhost ceph-mon[288526]: from='mgr.17121 
172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain"} : dispatch Dec 2 04:56:00 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain"} : dispatch Dec 2 04:56:00 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain"}]': finished Dec 2 04:56:00 localhost ceph-mon[288526]: Removed host np0005541910.localdomain Dec 2 04:56:00 localhost ceph-mon[288526]: executing refresh((['np0005541910.localdomain', 'np0005541911.localdomain', 'np0005541912.localdomain', 'np0005541913.localdomain', 'np0005541914.localdomain'],)) failed.#012Traceback (most recent call last):#012 File "/usr/share/ceph/mgr/cephadm/utils.py", line 94, in do_work#012 return f(*arg)#012 File "/usr/share/ceph/mgr/cephadm/serve.py", line 317, in refresh#012 and not self.mgr.inventory.has_label(host, SpecialHostLabels.NO_MEMORY_AUTOTUNE)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 253, in has_label#012 host = self._get_stored_name(host)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 181, in _get_stored_name#012 self.assert_host(host)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 209, in assert_host#012 raise OrchestratorError('host %s does not exist' % host)#012orchestrator._interface.OrchestratorError: host np0005541910.localdomain does not exist Dec 2 04:56:00 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:56:00 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:56:00 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:56:00 localhost ceph-mon[288526]: Updating 
np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:56:00 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:56:00 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:56:00 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:56:00 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:56:00 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:56:00 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:56:00 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:56:00 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:56:00 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:56:00 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0) Dec 2 04:56:00 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0) Dec 2 04:56:00 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 
0) Dec 2 04:56:00 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:56:00 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 04:56:01 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev 9f05625e-c4a5-4cd1-a185-eda0536a7c71 (Updating node-proxy deployment (+4 -> 4)) Dec 2 04:56:01 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 9f05625e-c4a5-4cd1-a185-eda0536a7c71 (Updating node-proxy deployment (+4 -> 4)) Dec 2 04:56:01 localhost ceph-mgr[287188]: [progress INFO root] Completed event 9f05625e-c4a5-4cd1-a185-eda0536a7c71 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Dec 2 04:56:01 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:01 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:01 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:01 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:01 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:01 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:01 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:01 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:01 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 04:56:01 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch 
Dec 2 04:56:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Dec 2 04:56:02 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 04:56:02 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 04:56:02 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:02 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:56:03.166 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:56:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:56:03.167 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:56:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:56:03.167 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:56:03 localhost podman[239757]: time="2025-12-02T09:56:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:56:03 localhost podman[239757]: @ - - [02/Dec/2025:09:56:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 04:56:03 
localhost podman[239757]: @ - - [02/Dec/2025:09:56:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19174 "" "Go-http-client/1.1" Dec 2 04:56:03 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Dec 2 04:56:04 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 04:56:04 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/727099599' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 04:56:04 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 04:56:04 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/727099599' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 04:56:04 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:56:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:56:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 04:56:05 localhost systemd[1]: tmp-crun.YGSfSG.mount: Deactivated successfully. 
Dec 2 04:56:05 localhost podman[295611]: 2025-12-02 09:56:05.076967121 +0000 UTC m=+0.079569639 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git)
Dec 2 04:56:05 localhost systemd[1]: tmp-crun.5Nif2E.mount: Deactivated successfully.
Dec 2 04:56:05 localhost podman[295610]: 2025-12-02 09:56:05.127625756 +0000 UTC m=+0.129717438 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 2 04:56:05 localhost podman[295610]: 2025-12-02 09:56:05.13561194 +0000 UTC m=+0.137703622 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 04:56:05 localhost podman[295611]: 2025-12-02 09:56:05.144787009 +0000 UTC m=+0.147389567 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 2 04:56:05 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 04:56:05 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 04:56:05 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:56:07 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.44259 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 2 04:56:07 localhost ceph-mgr[287188]: [cephadm INFO root] Saving service mon spec with placement label:mon
Dec 2 04:56:07 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon
Dec 2 04:56:07 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 2 04:56:07 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:07 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:07 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 2 04:56:07 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:56:07 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 2 04:56:07 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev f93cfb42-6db3-45f2-9d32-013716e483b0 (Updating node-proxy deployment (+4 -> 4))
Dec 2 04:56:07 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev f93cfb42-6db3-45f2-9d32-013716e483b0 (Updating node-proxy deployment (+4 -> 4))
Dec 2 04:56:07 localhost ceph-mgr[287188]: [progress INFO root] Completed event f93cfb42-6db3-45f2-9d32-013716e483b0 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 2 04:56:07 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 2 04:56:07 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 2 04:56:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:56:08 localhost ceph-mon[288526]: Saving service mon spec with placement label:mon
Dec 2 04:56:08 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:08 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:56:08 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:08 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34329 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541913", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 2 04:56:09 localhost sshd[295672]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:56:09 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:56:09 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_09:56:09
Dec 2 04:56:09 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 2 04:56:09 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 2 04:56:09 localhost ceph-mgr[287188]: [balancer INFO root] pools ['images', 'backups', 'manila_data', 'volumes', 'vms', '.mgr', 'manila_metadata']
Dec 2 04:56:09 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 2 04:56:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 2 04:56:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 04:56:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 2 04:56:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 04:56:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 2 04:56:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 04:56:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 2 04:56:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 04:56:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32)
Dec 2 04:56:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 04:56:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 2 04:56:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 04:56:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 2 04:56:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 04:56:09 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.1810441094360693e-06 of space, bias 4.0, pg target 0.001741927228736274 quantized to 16 (current 16)
Dec 2 04:56:09 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 04:56:09 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 04:56:09 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 04:56:09 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 04:56:09 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 04:56:09 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 04:56:09 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:56:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
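The pg_autoscaler lines above follow a visible pattern: each pool's usage fraction times its bias is scaled into a raw PG target, which is then quantized to a power of two no lower than the pool's minimum, and left alone when the current pg_num is already close. A rough sketch of that arithmetic follows; the scale factor of 200, the pg_min of 32, and the 3x change threshold are illustrative assumptions that approximately match the logged numbers, not the autoscaler's exact code:

```python
def pg_target(usage_fraction: float, bias: float, scale: float = 200.0) -> float:
    """Raw PG target: the pool's share of capacity times its bias,
    scaled by a cluster-wide factor (200 is an assumed value here)."""
    return usage_fraction * bias * scale

def quantize(target: float, current: int, pg_min: int = 32,
             threshold: float = 3.0) -> int:
    """Round the raw target up to a power of two (never below pg_min),
    keeping the current pg_num unless the ideal value differs from it
    by more than the threshold factor."""
    ideal = max(pg_min, 1 << (max(int(target), 1) - 1).bit_length())
    if current and max(ideal, current) / min(ideal, current) < threshold:
        return current
    return ideal

# Reproduces the 'vms' line: using 0.0033244564838079286 of space,
# bias 1.0 -> pg target ~0.665, quantized to 32 (current 32).
print(quantize(pg_target(0.0033244564838079286, 1.0), current=32))  # 32
```

With these assumptions the '.mgr' line also falls out (`pg_min=1` for that pool gives a quantized value of 1), while a raw target far above the current pg_num would be rounded up to the next power of two.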
Dec 2 04:56:09 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 2 04:56:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 2 04:56:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 2 04:56:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 2 04:56:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 2 04:56:09 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 2 04:56:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 2 04:56:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 2 04:56:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 2 04:56:09 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 2 04:56:09 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.26864 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005541913"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 2 04:56:09 localhost ceph-mgr[287188]: [cephadm INFO root] Remove daemons mon.np0005541913
Dec 2 04:56:09 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005541913
Dec 2 04:56:09 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 2 04:56:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "quorum_status"} : dispatch
Dec 2 04:56:09 localhost ceph-mgr[287188]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005541913: new quorum should be ['np0005541911', 'np0005541914', 'np0005541912'] (from ['np0005541911', 'np0005541914', 'np0005541912'])
Dec 2 04:56:09 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005541913: new quorum should be ['np0005541911', 'np0005541914', 'np0005541912'] (from ['np0005541911', 'np0005541914', 'np0005541912'])
Dec 2 04:56:09 localhost ceph-mgr[287188]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005541913 from monmap...
Dec 2 04:56:09 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removing monitor np0005541913 from monmap...
Dec 2 04:56:09 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "mon rm", "name": "np0005541913"} v 0)
Dec 2 04:56:09 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon rm", "name": "np0005541913"} : dispatch
Dec 2 04:56:09 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005541913 from np0005541913.localdomain -- ports []
Dec 2 04:56:09 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005541913 from np0005541913.localdomain -- ports []
Dec 2 04:56:09 localhost ceph-mon[288526]: mon.np0005541914@1(probing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541911"} v 0)
Dec 2 04:56:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 2 04:56:09 localhost ceph-mon[288526]: mon.np0005541914@1(probing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541912"} v 0)
Dec 2 04:56:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 2 04:56:09 localhost podman[295674]: 2025-12-02 09:56:09.868283097 +0000 UTC m=+0.078861686 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 2 04:56:09 localhost ceph-mon[288526]: mon.np0005541914@1(probing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0)
Dec 2 04:56:09 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 2 04:56:09 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 2 04:56:09 localhost ceph-mon[288526]: paxos.1).electionLogic(44) init, last seen epoch 44
Dec 2 04:56:09 localhost ceph-mon[288526]: mon.np0005541914@1(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:56:09 localhost ceph-mon[288526]: mon.np0005541914@1(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:56:09 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:56:09 localhost podman[295674]: 2025-12-02 09:56:09.906924996 +0000 UTC m=+0.117503595 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec 2 04:56:09 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 04:56:10 localhost ceph-mon[288526]: Remove daemons mon.np0005541913
Dec 2 04:56:10 localhost ceph-mon[288526]: Safe to remove mon.np0005541913: new quorum should be ['np0005541911', 'np0005541914', 'np0005541912'] (from ['np0005541911', 'np0005541914', 'np0005541912'])
Dec 2 04:56:10 localhost ceph-mon[288526]: Removing monitor np0005541913 from monmap...
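The "Safe to remove mon.np0005541913" message above reflects a simple safety check: compute the quorum that would remain after dropping the monitor and confirm it still holds a strict majority of the shrunken monmap. A minimal sketch of that reasoning, with illustrative names (this is not cephadm's actual code):

```python
def safe_to_remove(monmap: list[str], quorum: list[str], victim: str):
    """Is it safe to drop `victim`? The quorum members that survive
    must still form a strict majority of the shrunken monmap."""
    new_map = [m for m in monmap if m != victim]
    new_quorum = [m for m in quorum if m != victim]
    return len(new_quorum) > len(new_map) // 2, new_quorum

# The situation in the log: four mons in the map, three in quorum;
# removing mon.np0005541913 leaves the quorum untouched.
monmap = ['np0005541911', 'np0005541912', 'np0005541913', 'np0005541914']
quorum = ['np0005541911', 'np0005541914', 'np0005541912']
safe, new_quorum = safe_to_remove(monmap, quorum, 'np0005541913')
print(safe, new_quorum)  # True ['np0005541911', 'np0005541914', 'np0005541912']
```

Since np0005541913 was already out of quorum, the "new quorum" equals the old one, which is why the log prints the same three monitors on both sides of "(from ...)".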
Dec 2 04:56:10 localhost ceph-mon[288526]: Removing daemon mon.np0005541913 from np0005541913.localdomain -- ports []
Dec 2 04:56:10 localhost ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 2 04:56:10 localhost ceph-mon[288526]: mon.np0005541911 calling monitor election
Dec 2 04:56:10 localhost ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 2 04:56:10 localhost ceph-mon[288526]: mon.np0005541911 is new leader, mons np0005541911,np0005541914,np0005541912 in quorum (ranks 0,1,2)
Dec 2 04:56:10 localhost ceph-mon[288526]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 2 04:56:10 localhost ceph-mon[288526]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 2 04:56:10 localhost ceph-mon[288526]: stray daemon mgr.np0005541909.kfesnk on host np0005541909.localdomain not managed by cephadm
Dec 2 04:56:10 localhost ceph-mon[288526]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 2 04:56:10 localhost ceph-mon[288526]: stray host np0005541909.localdomain has 1 stray daemons: ['mgr.np0005541909.kfesnk']
Dec 2 04:56:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:56:12 localhost openstack_network_exporter[241816]: ERROR 09:56:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:56:12 localhost openstack_network_exporter[241816]: ERROR 09:56:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:56:12 localhost openstack_network_exporter[241816]: ERROR 09:56:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:56:12 localhost openstack_network_exporter[241816]: ERROR 09:56:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:56:12 localhost openstack_network_exporter[241816]:
Dec 2 04:56:12 localhost openstack_network_exporter[241816]: ERROR 09:56:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:56:12 localhost openstack_network_exporter[241816]:
Dec 2 04:56:12 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 2 04:56:12 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 2 04:56:12 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:12 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:56:12 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:56:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:56:13 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:13 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:13 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 2 04:56:13 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:56:13 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 2 04:56:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 2 04:56:13 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 2 04:56:13 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 2 04:56:13 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 2 04:56:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 2 04:56:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 2 04:56:13 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 2 04:56:13 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:13 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:13 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:56:14 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:56:14 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:56:14 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:56:14 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:56:14 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:56:14 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:56:14 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:56:14 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:56:14 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:56:14 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:56:14 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:56:14 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:56:15 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:56:15 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 2 04:56:15 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 2 04:56:15 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 2 04:56:15 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 2 04:56:15 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:15 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:15 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 2 04:56:15 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 2 04:56:15 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:56:15 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:56:15 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 2 04:56:15 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev e3273f04-a1c0-4cc6-9dda-2febd6ff084f (Updating node-proxy deployment (+4 -> 4))
Dec 2 04:56:15 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev e3273f04-a1c0-4cc6-9dda-2febd6ff084f (Updating node-proxy deployment (+4 -> 4))
Dec 2 04:56:15 localhost ceph-mgr[287188]: [progress INFO root] Completed event e3273f04-a1c0-4cc6-9dda-2febd6ff084f (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 2 04:56:15 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 2 04:56:15 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:15 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541911.adcgiw (monmap changed)...
Dec 2 04:56:15 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541911.adcgiw (monmap changed)...
Dec 2 04:56:15 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 2 04:56:15 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:56:15 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 2 04:56:15 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 2 04:56:15 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:15 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:15 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain
Dec 2 04:56:15 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain
Dec 2 04:56:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:56:16 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:56:16 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:56:16 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:56:16 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:56:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:16 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:56:16 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:56:16 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 2 04:56:16 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 2 04:56:16 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541911 (monmap changed)...
Dec 2 04:56:16 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541911 (monmap changed)...
Dec 2 04:56:16 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 2 04:56:16 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:56:16 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:16 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:16 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 2 04:56:16 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 2 04:56:17 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541911.adcgiw (monmap changed)...
Dec 2 04:56:17 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain
Dec 2 04:56:17 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:17 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:17 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:56:17 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:56:17 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 2 04:56:17 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 2 04:56:17 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 2 04:56:17 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 2 04:56:17 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541912 (monmap changed)...
Dec 2 04:56:17 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541912 (monmap changed)...
Dec 2 04:56:17 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 2 04:56:17 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:56:17 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:17 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:17 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 2 04:56:17 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 2 04:56:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:56:18 localhost ceph-mon[288526]: Reconfiguring crash.np0005541911 (monmap changed)...
Dec 2 04:56:18 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 2 04:56:18 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:18 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:18 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:18 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:56:18 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:56:18 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:56:18 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:56:18 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Dec 2 04:56:18 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Dec 2 04:56:18 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 2 04:56:18 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:56:18 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:18 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:18 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:56:18 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:56:19 localhost ceph-mon[288526]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 2 04:56:19 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 2 04:56:19 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:19 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:19 localhost ceph-mon[288526]: Reconfiguring osd.2 (monmap changed)...
Dec 2 04:56:19 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:56:19 localhost ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:56:19 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:56:19 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:56:19 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:56:19 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Dec 2 04:56:19 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Dec 2 04:56:19 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 2 04:56:19 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:56:19 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:19 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:19 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:56:19 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:56:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:56:20 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:56:20 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:56:20 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:20 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:20 localhost ceph-mon[288526]: Reconfiguring osd.5 (monmap changed)...
Dec 2 04:56:20 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:56:20 localhost ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:56:20 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:20 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:56:20 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:56:20 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 2 04:56:20 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:56:20 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:20 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:20 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:56:20 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:56:21 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:21 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:56:21 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:56:21 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:56:21 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:56:21 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:56:21 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:56:21 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:56:21 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:56:21 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 2 04:56:21 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:56:21 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 2 04:56:21 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 2 04:56:21 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:21 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:21 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 2 04:56:21 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 2 04:56:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:56:22 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:56:22 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:56:22 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541913 (monmap changed)...
Dec 2 04:56:22 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541913 (monmap changed)...
Dec 2 04:56:22 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 2 04:56:22 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:56:22 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:22 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:22 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 2 04:56:22 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 2 04:56:22 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:22 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:22 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:56:22 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:56:22 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:56:22 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 2 04:56:22 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:22 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:22 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:56:22 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:56:22 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.26871 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005541913.localdomain:172.18.0.104", "target": ["mon-mgr", ""]}]: dispatch
Dec 2 04:56:22 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 2 04:56:22 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 2 04:56:22 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:56:22 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:22 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:22 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005541913 on np0005541913.localdomain
Dec 2 04:56:22 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005541913 on np0005541913.localdomain
Dec 2 04:56:23 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:56:23 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:56:23 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec 2 04:56:23 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec 2 04:56:23 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 2 04:56:23 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 2 04:56:23 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:23 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:23 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 2 04:56:23 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 2 04:56:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:56:23 localhost ceph-mon[288526]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 2 04:56:23 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 2 04:56:23 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:23 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:56:23 localhost ceph-mon[288526]: Deploying daemon mon.np0005541913 on np0005541913.localdomain
Dec 2 04:56:23 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:23 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:23 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 2 04:56:24 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:56:24 localhost ceph-mon[288526]: Reconfiguring osd.0 (monmap changed)...
Dec 2 04:56:24 localhost ceph-mon[288526]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 2 04:56:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:56:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:56:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:56:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:56:25 localhost podman[296029]: 2025-12-02 09:56:25.081721397 +0000 UTC m=+0.080652761 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:56:25 localhost podman[296029]: 2025-12-02 09:56:25.168887136 +0000 UTC 
m=+0.167818540 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2) Dec 2 04:56:25 localhost systemd[1]: tmp-crun.hmKeQO.mount: Deactivated successfully. 
Dec 2 04:56:25 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:56:25 localhost podman[296031]: 2025-12-02 09:56:25.193279021 +0000 UTC m=+0.184003725 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 
2 04:56:25 localhost podman[296037]: 2025-12-02 09:56:25.121933095 +0000 UTC m=+0.113043571 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS) Dec 2 04:56:25 localhost podman[296031]: 2025-12-02 09:56:25.232792316 +0000 UTC m=+0.223517050 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 04:56:25 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 04:56:25 localhost podman[296030]: 2025-12-02 09:56:25.270046602 +0000 UTC m=+0.263915682 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:56:25 localhost podman[296030]: 2025-12-02 09:56:25.279851462 +0000 UTC m=+0.273720572 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:56:25 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:56:25 localhost podman[296037]: 2025-12-02 09:56:25.308947639 +0000 UTC m=+0.300058125 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:56:25 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:56:25 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:25 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Dec 2 04:56:25 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:56:25 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:56:26 localhost systemd[1]: tmp-crun.liInR3.mount: Deactivated successfully. Dec 2 04:56:26 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:56:26 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:56:26 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... Dec 2 04:56:26 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... 
Dec 2 04:56:26 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0) Dec 2 04:56:26 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 2 04:56:26 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:56:26 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:56:26 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:56:26 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:56:26 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:26 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:56:26 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 04:56:26 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 04:56:26 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:56:26 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:26 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:26 
localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:26 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:26 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 2 04:56:27 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:56:27 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:56:27 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... Dec 2 04:56:27 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... Dec 2 04:56:27 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Dec 2 04:56:27 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:56:27 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:56:27 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:56:27 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] 
Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:56:27 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:56:27 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:56:27 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 04:56:27 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 04:56:27 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:56:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Dec 2 04:56:27 localhost ceph-mon[288526]: Reconfiguring osd.3 (monmap changed)... 
Dec 2 04:56:27 localhost ceph-mon[288526]: Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:56:27 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:27 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:27 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:56:27 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:56:28 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:56:29 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:29 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:56:29 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:56:29 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 04:56:29 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 04:56:29 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for 
mon.np0005541913: (2) No such file or directory Dec 2 04:56:29 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:56:29 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:56:29 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 04:56:29 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 04:56:29 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:56:29 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... Dec 2 04:56:29 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:56:29 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541913.mfesdm (monmap changed)... Dec 2 04:56:29 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541913.mfesdm (monmap changed)... 
Dec 2 04:56:29 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Dec 2 04:56:29 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:56:29 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0) Dec 2 04:56:29 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch Dec 2 04:56:29 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:56:29 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:56:29 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:56:29 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:56:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Dec 2 04:56:30 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:56:30 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:56:30 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541914 (monmap changed)... Dec 2 04:56:30 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541914 (monmap changed)... Dec 2 04:56:30 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 2 04:56:30 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:56:30 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:56:30 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:56:30 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain Dec 2 04:56:30 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain Dec 2 04:56:30 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:30 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:56:30 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 04:56:30 localhost 
ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 04:56:30 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:56:30 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:30 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:30 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)... Dec 2 04:56:30 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:56:30 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:56:30 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:56:30 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:30 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:30 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:56:30 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:56:31 localhost 
podman[296167]: Dec 2 04:56:31 localhost podman[296167]: 2025-12-02 09:56:31.222302285 +0000 UTC m=+0.089318206 container create 45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_shockley, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, ceph=True, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, io.buildah.version=1.41.4) Dec 2 04:56:31 localhost podman[296167]: 2025-12-02 09:56:31.162000676 +0000 UTC m=+0.029016617 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:56:31 localhost systemd[1]: Started libpod-conmon-45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3.scope. Dec 2 04:56:31 localhost systemd[1]: Started libcrun container. 
Dec 2 04:56:31 localhost podman[296167]: 2025-12-02 09:56:31.45292866 +0000 UTC m=+0.319944541 container init 45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_shockley, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., release=1763362218, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, version=7) Dec 2 04:56:31 localhost podman[296167]: 2025-12-02 09:56:31.464911327 +0000 UTC m=+0.331927258 container start 45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_shockley, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, ceph=True, GIT_BRANCH=main, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:56:31 localhost podman[296167]: 2025-12-02 09:56:31.465294198 +0000 UTC m=+0.332310169 container attach 45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_shockley, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1763362218, GIT_CLEAN=True, RELEASE=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.description=Red 
Hat Ceph Storage 7) Dec 2 04:56:31 localhost clever_shockley[296182]: 167 167 Dec 2 04:56:31 localhost systemd[1]: libpod-45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3.scope: Deactivated successfully. Dec 2 04:56:31 localhost podman[296167]: 2025-12-02 09:56:31.471565099 +0000 UTC m=+0.338581020 container died 45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_shockley, ceph=True, io.openshift.expose-services=, version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, name=rhceph, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., RELEASE=main) Dec 2 04:56:31 localhost podman[296187]: 2025-12-02 09:56:31.558328626 +0000 UTC m=+0.076977229 container remove 45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_shockley, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, GIT_CLEAN=True, release=1763362218, build-date=2025-11-26T19:44:28Z) Dec 2 04:56:31 localhost systemd[1]: libpod-conmon-45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3.scope: Deactivated successfully. 
Dec 2 04:56:31 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 2 04:56:31 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 2 04:56:31 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 2 04:56:31 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 2 04:56:31 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:56:31 localhost ceph-mon[288526]: Reconfiguring crash.np0005541914 (monmap changed)...
Dec 2 04:56:31 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 2 04:56:31 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:56:31 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 2 04:56:31 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 2 04:56:31 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 2 04:56:31 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 2 04:56:31 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:31 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:31 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 2 04:56:31 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 2 04:56:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:56:32 localhost systemd[1]: var-lib-containers-storage-overlay-6c5524597fd87619ceaca8f45ad57a3108c2855219448df246d81f615ed5ab3a-merged.mount: Deactivated successfully.
Dec 2 04:56:32 localhost podman[296257]: Dec 2 04:56:32 localhost podman[296257]: 2025-12-02 09:56:32.275975269 +0000 UTC m=+0.058113133 container create 06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_brahmagupta, io.buildah.version=1.41.4, distribution-scope=public, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-type=git, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, release=1763362218, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph) Dec 2 04:56:32 localhost systemd[1]: Started libpod-conmon-06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee.scope. Dec 2 04:56:32 localhost systemd[1]: Started libcrun container. 
Dec 2 04:56:32 localhost podman[296257]: 2025-12-02 09:56:32.348358667 +0000 UTC m=+0.130496531 container init 06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_brahmagupta, io.buildah.version=1.41.4, vcs-type=git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vendor=Red Hat, Inc., ceph=True, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1763362218, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public) Dec 2 04:56:32 localhost podman[296257]: 2025-12-02 09:56:32.253611797 +0000 UTC m=+0.035749681 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:56:32 localhost podman[296257]: 2025-12-02 09:56:32.359507587 +0000 UTC m=+0.141645451 container start 06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_brahmagupta, io.openshift.expose-services=, RELEASE=main, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, release=1763362218, architecture=x86_64, vcs-type=git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , ceph=True, com.redhat.component=rhceph-container, name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:56:32 localhost podman[296257]: 2025-12-02 09:56:32.360613411 +0000 UTC m=+0.142751305 container attach 06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_brahmagupta, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vendor=Red Hat, Inc., GIT_BRANCH=main, io.buildah.version=1.41.4, RELEASE=main, build-date=2025-11-26T19:44:28Z, name=rhceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 
7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-type=git) Dec 2 04:56:32 localhost vigilant_brahmagupta[296272]: 167 167 Dec 2 04:56:32 localhost systemd[1]: libpod-06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee.scope: Deactivated successfully. Dec 2 04:56:32 localhost podman[296257]: 2025-12-02 09:56:32.364044386 +0000 UTC m=+0.146182240 container died 06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_brahmagupta, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, name=rhceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:56:32 localhost podman[296277]: 2025-12-02 09:56:32.437304771 +0000 UTC m=+0.068914174 container remove 06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=vigilant_brahmagupta, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.buildah.version=1.41.4, version=7, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, release=1763362218, RELEASE=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:56:32 localhost systemd[1]: libpod-conmon-06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee.scope: Deactivated successfully. 
Dec 2 04:56:32 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:56:32 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:56:32 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 2 04:56:32 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 2 04:56:32 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 2 04:56:32 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 2 04:56:32 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:56:32 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:56:32 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Dec 2 04:56:32 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Dec 2 04:56:32 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 2 04:56:32 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Dec 2 04:56:32 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:32 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:32 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 2 04:56:32 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 2 04:56:32 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:32 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:32 localhost ceph-mon[288526]: Reconfiguring osd.1 (monmap changed)...
Dec 2 04:56:32 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 2 04:56:32 localhost ceph-mon[288526]: Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 2 04:56:32 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:32 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:32 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 2 04:56:33 localhost podman[296353]: 
Dec 2 04:56:33 localhost podman[296353]: 2025-12-02 09:56:33.203038091 +0000 UTC m=+0.059343172 container create 77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_jemison, io.openshift.expose-services=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, build-date=2025-11-26T19:44:28Z) Dec 2 04:56:33 localhost systemd[1]: tmp-crun.AbV84k.mount: Deactivated successfully. Dec 2 04:56:33 localhost systemd[1]: var-lib-containers-storage-overlay-9295b727f7d480e619a54cfb8191e82d04472c157df76542e8738afd771dafa7-merged.mount: Deactivated successfully. Dec 2 04:56:33 localhost systemd[1]: Started libpod-conmon-77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f.scope. Dec 2 04:56:33 localhost systemd[1]: Started libcrun container. 
Dec 2 04:56:33 localhost podman[296353]: 2025-12-02 09:56:33.267371273 +0000 UTC m=+0.123676364 container init 77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_jemison, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, version=7, GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:56:33 localhost podman[296353]: 2025-12-02 09:56:33.170990983 +0000 UTC m=+0.027296094 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:56:33 localhost brave_jemison[296368]: 167 167 Dec 2 04:56:33 localhost systemd[1]: libpod-77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f.scope: Deactivated successfully. 
Dec 2 04:56:33 localhost podman[296353]: 2025-12-02 09:56:33.284746443 +0000 UTC m=+0.141051514 container start 77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_jemison, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, release=1763362218, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, architecture=x86_64, ceph=True, version=7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public) Dec 2 04:56:33 localhost podman[296353]: 2025-12-02 09:56:33.287143016 +0000 UTC m=+0.143448127 container attach 77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_jemison, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, 
description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:56:33 localhost podman[296353]: 2025-12-02 09:56:33.290104637 +0000 UTC m=+0.146409738 container died 77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_jemison, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, 
RELEASE=main, ceph=True) Dec 2 04:56:33 localhost podman[296373]: 2025-12-02 09:56:33.368246301 +0000 UTC m=+0.081046564 container remove 77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_jemison, vendor=Red Hat, Inc., version=7, maintainer=Guillaume Abrioux , vcs-type=git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.buildah.version=1.41.4) Dec 2 04:56:33 localhost systemd[1]: libpod-conmon-77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f.scope: Deactivated successfully. 
Dec 2 04:56:33 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:56:33 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:56:33 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 2 04:56:33 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 2 04:56:33 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 2 04:56:33 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:56:33 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:33 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:33 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 2 04:56:33 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 2 04:56:33 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 
172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 2 04:56:33 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 2 04:56:33 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 2 04:56:33 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 2 04:56:33 localhost podman[239757]: time="2025-12-02T09:56:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:56:33 localhost podman[239757]: @ - - [02/Dec/2025:09:56:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 2 04:56:33 localhost podman[239757]: @ - - [02/Dec/2025:09:56:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19187 "" "Go-http-client/1.1"
Dec 2 04:56:33 localhost ceph-mon[288526]: Reconfiguring osd.4 (monmap changed)... 
Dec 2 04:56:33 localhost ceph-mon[288526]: Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 2 04:56:33 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:33 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:33 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:56:33 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:56:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:56:34 localhost podman[296450]:
Dec 2 04:56:34 localhost podman[296450]: 2025-12-02 09:56:34.095246199 +0000 UTC m=+0.073991609 container create 4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_rosalind, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, ceph=True, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-type=git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, release=1763362218, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 04:56:34 localhost systemd[1]: Started libpod-conmon-4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd.scope.
Dec 2 04:56:34 localhost systemd[1]: Started libcrun container.
Dec 2 04:56:34 localhost podman[296450]: 2025-12-02 09:56:34.160103178 +0000 UTC m=+0.138848578 container init 4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_rosalind, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2025-11-26T19:44:28Z, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph)
Dec 2 04:56:34 localhost podman[296450]: 2025-12-02 09:56:34.065077129 +0000 UTC m=+0.043822549 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:56:34 localhost hardcore_rosalind[296465]: 167 167
Dec 2 04:56:34 localhost systemd[1]: libpod-4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd.scope: Deactivated successfully.
Dec 2 04:56:34 localhost podman[296450]: 2025-12-02 09:56:34.17264258 +0000 UTC m=+0.151387990 container start 4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_rosalind, GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, version=7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, build-date=2025-11-26T19:44:28Z, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, vendor=Red Hat, Inc.)
Dec 2 04:56:34 localhost podman[296450]: 2025-12-02 09:56:34.173243609 +0000 UTC m=+0.151989009 container attach 4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_rosalind, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, ceph=True, com.redhat.component=rhceph-container, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 2 04:56:34 localhost podman[296450]: 2025-12-02 09:56:34.175294311 +0000 UTC m=+0.154039721 container died 4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_rosalind, io.buildah.version=1.41.4, name=rhceph, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.component=rhceph-container, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 2 04:56:34 localhost systemd[1]: var-lib-containers-storage-overlay-7734eeb04191c15cc7274dfb084d8566c6ce911f27e13b565b97259b4fff01b8-merged.mount: Deactivated successfully.
Dec 2 04:56:34 localhost systemd[1]: var-lib-containers-storage-overlay-c067362ddabcb46a05f9fd8754c83acb91112f35b7a0bce279ac39112d6e1fb3-merged.mount: Deactivated successfully.
Dec 2 04:56:34 localhost podman[296470]: 2025-12-02 09:56:34.270539876 +0000 UTC m=+0.087787678 container remove 4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_rosalind, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main, release=1763362218, com.redhat.component=rhceph-container, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vendor=Red Hat, Inc., name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7)
Dec 2 04:56:34 localhost systemd[1]: libpod-conmon-4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd.scope: Deactivated successfully.
Dec 2 04:56:34 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:56:34 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:56:34 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 2 04:56:34 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 2 04:56:34 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 2 04:56:34 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:56:34 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 2 04:56:34 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 2 04:56:34 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:34 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:34 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 2 04:56:34 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 2 04:56:34 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:56:34 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:56:34 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 2 04:56:34 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 2 04:56:34 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 2 04:56:34 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 2 04:56:34 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 2 04:56:34 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 2 04:56:34 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:34 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:34 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:56:34 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:56:34 localhost podman[296540]:
Dec 2 04:56:34 localhost podman[296540]: 2025-12-02 09:56:34.898631407 +0000 UTC m=+0.059854667 container create 857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_wright, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, architecture=x86_64, name=rhceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, RELEASE=main, ceph=True, com.redhat.component=rhceph-container)
Dec 2 04:56:34 localhost systemd[1]: Started libpod-conmon-857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1.scope.
Dec 2 04:56:34 localhost systemd[1]: Started libcrun container.
Dec 2 04:56:34 localhost podman[296540]: 2025-12-02 09:56:34.947498018 +0000 UTC m=+0.108721308 container init 857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_wright, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, version=7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 04:56:34 localhost podman[296540]: 2025-12-02 09:56:34.957134852 +0000 UTC m=+0.118358142 container start 857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_wright, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, name=rhceph, ceph=True, com.redhat.component=rhceph-container, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux )
Dec 2 04:56:34 localhost podman[296540]: 2025-12-02 09:56:34.957425631 +0000 UTC m=+0.118648921 container attach 857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_wright, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-type=git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, release=1763362218, GIT_CLEAN=True, io.openshift.expose-services=)
Dec 2 04:56:34 localhost sad_wright[296555]: 167 167
Dec 2 04:56:34 localhost systemd[1]: libpod-857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1.scope: Deactivated successfully.
Dec 2 04:56:34 localhost podman[296540]: 2025-12-02 09:56:34.959550116 +0000 UTC m=+0.120773436 container died 857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_wright, name=rhceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 04:56:34 localhost podman[296540]: 2025-12-02 09:56:34.869961463 +0000 UTC m=+0.031184743 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:56:35 localhost podman[296560]: 2025-12-02 09:56:35.044483507 +0000 UTC m=+0.073860364 container remove 857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_wright, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-type=git, RELEASE=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container)
Dec 2 04:56:35 localhost systemd[1]: libpod-conmon-857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1.scope: Deactivated successfully.
Dec 2 04:56:35 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:56:35 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:56:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 04:56:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:56:35 localhost systemd[1]: var-lib-containers-storage-overlay-6112c5ca23687b0135b492a4dbf9ae9cd1a1510812778a3381ff8b6380fa9154-merged.mount: Deactivated successfully.
Dec 2 04:56:35 localhost podman[296596]: 2025-12-02 09:56:35.301943521 +0000 UTC m=+0.066090627 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, io.openshift.tags=minimal rhel9, version=9.6, io.buildah.version=1.33.7, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 2 04:56:35 localhost podman[296596]: 2025-12-02 09:56:35.312128532 +0000 UTC m=+0.076275668 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, version=9.6, architecture=x86_64, io.openshift.expose-services=)
Dec 2 04:56:35 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 04:56:35 localhost podman[296594]: 2025-12-02 09:56:35.364722796 +0000 UTC m=+0.130590955 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 2 04:56:35 localhost podman[296594]: 2025-12-02 09:56:35.37271664 +0000 UTC m=+0.138584809 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 2 04:56:35 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 04:56:35 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 2 04:56:35 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 2 04:56:35 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 2 04:56:35 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 2 04:56:35 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 2 04:56:35 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 2 04:56:35 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:35 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:56:36 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:56:36 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 2 04:56:36 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 2 04:56:36 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 2 04:56:36 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 2 04:56:36 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:56:36 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:56:36 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:56:36 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:56:36 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:56:36 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 2 04:56:36 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:56:36 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 2 04:56:36 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev fe6eb9e2-f462-491d-b6e2-31d810d4c7bf (Updating node-proxy deployment (+4 -> 4))
Dec 2 04:56:36 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev fe6eb9e2-f462-491d-b6e2-31d810d4c7bf (Updating node-proxy deployment (+4 -> 4))
Dec 2 04:56:36 localhost ceph-mgr[287188]: [progress INFO root] Completed event
fe6eb9e2-f462-491d-b6e2-31d810d4c7bf (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Dec 2 04:56:36 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 04:56:36 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 04:56:36 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:36 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:36 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:56:36 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:37 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 04:56:37 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "status", "format": "json"} v 0) Dec 2 04:56:37 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.200:0/1875286268' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Dec 2 04:56:37 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 04:56:37 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:56:37 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 04:56:37 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 04:56:37 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:56:37 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Dec 2 04:56:38 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 04:56:38 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to 
list of hints Dec 2 04:56:38 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34342 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:56:38 localhost ceph-mgr[287188]: [cephadm INFO root] Reconfig service osd.default_drive_group Dec 2 04:56:38 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfig service osd.default_drive_group Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 
handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:56:38 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev 60afd78b-f32e-4ca5-8a7b-9f2c5b6edcb9 (Updating node-proxy deployment (+4 -> 4)) Dec 2 04:56:38 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 60afd78b-f32e-4ca5-8a7b-9f2c5b6edcb9 (Updating node-proxy deployment (+4 -> 4)) Dec 2 04:56:38 localhost ceph-mgr[287188]: [progress INFO root] Completed event 60afd78b-f32e-4ca5-8a7b-9f2c5b6edcb9 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: from='mgr.17121 ' 
entity='mgr.np0005541914.lljzmk' Dec 2 04:56:38 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:38 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:56:38 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:38 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:38 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:38 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:38 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:38 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:38 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 2 04:56:38 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config 
generate-minimal-conf"} v 0) Dec 2 04:56:38 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:56:38 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005541912.localdomain Dec 2 04:56:38 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005541912.localdomain Dec 2 04:56:39 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:56:39 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:56:39 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 04:56:39 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 04:56:39 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:56:39 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 04:56:39 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 04:56:39 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 04:56:39 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Dec 2 04:56:39 localhost ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 2 04:56:39 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 04:56:39 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Dec 2 04:56:39 localhost ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 2 04:56:39 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Dec 2 04:56:39 localhost ceph-mon[288526]: Reconfig service osd.default_drive_group Dec 2 04:56:39 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:39 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:39 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:39 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:39 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 2 04:56:39 localhost ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain Dec 2 04:56:39 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:56:39 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:56:39 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:56:39 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:56:39 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0) Dec 2 04:56:39 
localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 2 04:56:39 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:56:39 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:56:39 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005541912.localdomain Dec 2 04:56:39 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005541912.localdomain Dec 2 04:56:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:56:40 localhost systemd[1]: tmp-crun.UZ9B64.mount: Deactivated successfully. 
Dec 2 04:56:40 localhost podman[296723]: 2025-12-02 09:56:40.075647631 +0000 UTC m=+0.074593506 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 2 04:56:40 localhost podman[296723]: 2025-12-02 09:56:40.112181886 +0000 UTC m=+0.111127761 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:56:40 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:56:40 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:56:40 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 04:56:40 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 04:56:40 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:56:40 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:40 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:40 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:40 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:40 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:40 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:40 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 2 04:56:40 localhost ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain Dec 2 04:56:40 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:56:40 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:56:40 
localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:56:41 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:56:41 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) Dec 2 04:56:41 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 2 04:56:41 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:56:41 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:56:41 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:56:41 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:56:41 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:56:41 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 04:56:41 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 04:56:41 localhost ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory Dec 2 
04:56:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v50: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s Dec 2 04:56:41 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:41 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:41 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:41 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:41 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 2 04:56:41 localhost ceph-mon[288526]: Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:56:42 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:56:42 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:56:42 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:56:42 localhost openstack_network_exporter[241816]: ERROR 09:56:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:56:42 localhost openstack_network_exporter[241816]: ERROR 09:56:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:56:42 localhost openstack_network_exporter[241816]: ERROR 09:56:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:56:42 localhost openstack_network_exporter[241816]: ERROR 09:56:42 appctl.go:174: 
call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:56:42 localhost openstack_network_exporter[241816]: Dec 2 04:56:42 localhost openstack_network_exporter[241816]: ERROR 09:56:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:56:42 localhost openstack_network_exporter[241816]: Dec 2 04:56:42 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:56:42 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0) Dec 2 04:56:42 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 2 04:56:42 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:56:42 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:56:42 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:56:42 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:56:42 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 04:56:42 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 04:56:42 localhost ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:56:42 localhost ceph-mgr[287188]: mgr finish mon failed to return 
metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:56:42 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 04:56:42 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 04:56:42 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:42 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:43 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:43 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:43 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:43 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:43 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 2 04:56:43 localhost ceph-mon[288526]: Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:56:43 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:43 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:56:43 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:56:43 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:56:43 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:56:43 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Dec 2 04:56:43 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 2 04:56:43 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:56:43 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:56:43 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005541914.localdomain Dec 2 04:56:43 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005541914.localdomain Dec 2 04:56:43 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mgr fail"} v 0) Dec 2 04:56:43 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='client.? 
172.18.0.200:0/219576174' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 2 04:56:43 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e88 e88: 6 total, 6 up, 6 in Dec 2 04:56:43 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:43.307+0000 7f5cbd03f640 -1 mgr handle_mgr_map I was active but no longer am Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr handle_mgr_map I was active but no longer am Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr respawn e: '/usr/bin/ceph-mgr' Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr respawn 0: '/usr/bin/ceph-mgr' Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr respawn 1: '-n' Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr respawn 2: 'mgr.np0005541914.lljzmk' Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr respawn 3: '-f' Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr respawn 4: '--setuser' Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr respawn 5: 'ceph' Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr respawn 6: '--setgroup' Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr respawn 7: 'ceph' Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr respawn 8: '--default-log-to-file=false' Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr respawn 9: '--default-log-to-journald=true' Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr respawn 10: '--default-log-to-stderr=false' Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr respawn respawning with exe /usr/bin/ceph-mgr Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr respawn exe_path /proc/self/exe Dec 2 04:56:43 localhost systemd-logind[760]: Session 65 logged out. Waiting for processes to exit. 
Dec 2 04:56:43 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: ignoring --setuser ceph since I am not root Dec 2 04:56:43 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: ignoring --setgroup ceph since I am not root Dec 2 04:56:43 localhost ceph-mgr[287188]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Dec 2 04:56:43 localhost ceph-mgr[287188]: pidfile_write: ignore empty --pid-file Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr[py] Loading python module 'alerts' Dec 2 04:56:43 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:43.503+0000 7fd411515140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr[py] Module alerts has missing NOTIFY_TYPES member Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr[py] Loading python module 'balancer' Dec 2 04:56:43 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:43.576+0000 7fd411515140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr[py] Module balancer has missing NOTIFY_TYPES member Dec 2 04:56:43 localhost ceph-mgr[287188]: mgr[py] Loading python module 'cephadm' Dec 2 04:56:43 localhost podman[296818]: Dec 2 04:56:43 localhost podman[296818]: 2025-12-02 09:56:43.872129069 +0000 UTC m=+0.085593542 container create f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_zhukovsky, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, architecture=x86_64, release=1763362218, 
name=rhceph, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., version=7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:56:43 localhost systemd[1]: Started libpod-conmon-f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b.scope. Dec 2 04:56:43 localhost systemd[1]: Started libcrun container. Dec 2 04:56:43 localhost podman[296818]: 2025-12-02 09:56:43.832317704 +0000 UTC m=+0.045782207 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:56:43 localhost podman[296818]: 2025-12-02 09:56:43.935538413 +0000 UTC m=+0.149002886 container init f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_zhukovsky, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph 
Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , name=rhceph) Dec 2 04:56:43 localhost systemd[1]: tmp-crun.HilKA0.mount: Deactivated successfully. Dec 2 04:56:43 localhost podman[296818]: 2025-12-02 09:56:43.949798048 +0000 UTC m=+0.163262521 container start f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_zhukovsky, distribution-scope=public, vendor=Red Hat, Inc., version=7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, ceph=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, RELEASE=main, architecture=x86_64, GIT_BRANCH=main, GIT_CLEAN=True, release=1763362218, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2025-11-26T19:44:28Z) Dec 2 04:56:43 localhost podman[296818]: 2025-12-02 09:56:43.951696616 +0000 UTC m=+0.165161129 container attach 
f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_zhukovsky, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, io.openshift.expose-services=, ceph=True, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:56:43 localhost quirky_zhukovsky[296833]: 167 167 Dec 2 04:56:43 localhost systemd[1]: libpod-f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b.scope: Deactivated successfully. 
Dec 2 04:56:43 localhost podman[296818]: 2025-12-02 09:56:43.956171673 +0000 UTC m=+0.169636156 container died f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_zhukovsky, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc.) 
Dec 2 04:56:44 localhost podman[296838]: 2025-12-02 09:56:44.036566345 +0000 UTC m=+0.076109513 container remove f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_zhukovsky, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, ceph=True, distribution-scope=public, GIT_BRANCH=main, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:56:44 localhost systemd[1]: libpod-conmon-f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b.scope: Deactivated successfully. 
Dec 2 04:56:44 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:44 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:44 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:44 localhost ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:44 localhost ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 2 04:56:44 localhost ceph-mon[288526]: from='client.? 172.18.0.200:0/219576174' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 2 04:56:44 localhost ceph-mon[288526]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 2 04:56:44 localhost ceph-mon[288526]: Activating manager daemon np0005541910.kzipdo Dec 2 04:56:44 localhost ceph-mon[288526]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 2 04:56:44 localhost ceph-mgr[287188]: mgr[py] Loading python module 'crash' Dec 2 04:56:44 localhost systemd[1]: session-65.scope: Deactivated successfully. Dec 2 04:56:44 localhost systemd[1]: session-65.scope: Consumed 21.680s CPU time. Dec 2 04:56:44 localhost systemd-logind[760]: Removed session 65. 
Dec 2 04:56:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:44.242+0000 7fd411515140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Dec 2 04:56:44 localhost ceph-mgr[287188]: mgr[py] Module crash has missing NOTIFY_TYPES member Dec 2 04:56:44 localhost ceph-mgr[287188]: mgr[py] Loading python module 'dashboard' Dec 2 04:56:44 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:56:44 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:44 localhost ceph-mgr[287188]: mgr[py] Loading python module 'devicehealth' Dec 2 04:56:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:44.841+0000 7fd411515140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Dec 2 04:56:44 localhost ceph-mgr[287188]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Dec 2 04:56:44 localhost ceph-mgr[287188]: mgr[py] Loading python module 'diskprediction_local' Dec 2 04:56:44 localhost systemd[1]: var-lib-containers-storage-overlay-d968559f1d6c92a0c06c3c251309c8df174e75f0a8f8455d2bb8a59e80f71988-merged.mount: Deactivated successfully. Dec 2 04:56:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
Dec 2 04:56:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. Dec 2 04:56:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: from numpy import show_config as show_numpy_config Dec 2 04:56:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:44.996+0000 7fd411515140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Dec 2 04:56:44 localhost ceph-mgr[287188]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Dec 2 04:56:44 localhost ceph-mgr[287188]: mgr[py] Loading python module 'influx' Dec 2 04:56:45 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:45.059+0000 7fd411515140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Dec 2 04:56:45 localhost ceph-mgr[287188]: mgr[py] Module influx has missing NOTIFY_TYPES member Dec 2 04:56:45 localhost ceph-mgr[287188]: mgr[py] Loading python module 'insights' Dec 2 04:56:45 localhost ceph-mgr[287188]: mgr[py] Loading python module 'iostat' Dec 2 04:56:45 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:45.185+0000 7fd411515140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Dec 2 04:56:45 localhost ceph-mgr[287188]: mgr[py] Module iostat has missing NOTIFY_TYPES member Dec 2 04:56:45 localhost ceph-mgr[287188]: mgr[py] Loading python module 'k8sevents' Dec 2 04:56:45 localhost ceph-mgr[287188]: mgr[py] Loading python module 'localpool' Dec 2 04:56:45 localhost ceph-mgr[287188]: mgr[py] Loading python module 'mds_autoscaler' Dec 2 04:56:45 localhost ceph-mgr[287188]: mgr[py] Loading python module 'mirroring' Dec 2 04:56:45 localhost ceph-mgr[287188]: mgr[py] Loading python module 'nfs' Dec 2 04:56:45 
localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:45.977+0000 7fd411515140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Dec 2 04:56:45 localhost ceph-mgr[287188]: mgr[py] Module nfs has missing NOTIFY_TYPES member Dec 2 04:56:45 localhost ceph-mgr[287188]: mgr[py] Loading python module 'orchestrator' Dec 2 04:56:46 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:46.133+0000 7fd411515140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Dec 2 04:56:46 localhost ceph-mgr[287188]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Dec 2 04:56:46 localhost ceph-mgr[287188]: mgr[py] Loading python module 'osd_perf_query' Dec 2 04:56:46 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:46.203+0000 7fd411515140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Dec 2 04:56:46 localhost ceph-mgr[287188]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Dec 2 04:56:46 localhost ceph-mgr[287188]: mgr[py] Loading python module 'osd_support' Dec 2 04:56:46 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:46.265+0000 7fd411515140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Dec 2 04:56:46 localhost ceph-mgr[287188]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Dec 2 04:56:46 localhost ceph-mgr[287188]: mgr[py] Loading python module 'pg_autoscaler' Dec 2 04:56:46 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:46.337+0000 7fd411515140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Dec 2 04:56:46 localhost ceph-mgr[287188]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Dec 2 04:56:46 localhost ceph-mgr[287188]: mgr[py] Loading python module 'progress' Dec 2 04:56:46 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:46.399+0000 7fd411515140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Dec 2 04:56:46 localhost ceph-mgr[287188]: mgr[py] Module progress has missing NOTIFY_TYPES member Dec 2 04:56:46 localhost ceph-mgr[287188]: mgr[py] Loading python module 'prometheus' Dec 2 04:56:46 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:46 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:46.724+0000 7fd411515140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Dec 2 04:56:46 localhost ceph-mgr[287188]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Dec 2 04:56:46 localhost ceph-mgr[287188]: mgr[py] Loading python module 'rbd_support' Dec 2 04:56:46 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:46.811+0000 7fd411515140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Dec 2 04:56:46 localhost ceph-mgr[287188]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Dec 2 04:56:46 localhost ceph-mgr[287188]: mgr[py] Loading python module 'restful' Dec 2 04:56:46 localhost ceph-mgr[287188]: mgr[py] Loading python module 'rgw' Dec 2 04:56:47 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:47.154+0000 7fd411515140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Dec 2 04:56:47 localhost ceph-mgr[287188]: mgr[py] Module rgw has missing NOTIFY_TYPES member Dec 2 04:56:47 localhost ceph-mgr[287188]: mgr[py] Loading python module 'rook' Dec 2 04:56:47 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:47.605+0000 7fd411515140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Dec 2 04:56:47 localhost ceph-mgr[287188]: mgr[py] Module rook 
has missing NOTIFY_TYPES member Dec 2 04:56:47 localhost ceph-mgr[287188]: mgr[py] Loading python module 'selftest' Dec 2 04:56:47 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:47.668+0000 7fd411515140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Dec 2 04:56:47 localhost ceph-mgr[287188]: mgr[py] Module selftest has missing NOTIFY_TYPES member Dec 2 04:56:47 localhost ceph-mgr[287188]: mgr[py] Loading python module 'snap_schedule' Dec 2 04:56:47 localhost ceph-mgr[287188]: mgr[py] Loading python module 'stats' Dec 2 04:56:47 localhost ceph-mgr[287188]: mgr[py] Loading python module 'status' Dec 2 04:56:47 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:47.872+0000 7fd411515140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Dec 2 04:56:47 localhost ceph-mgr[287188]: mgr[py] Module status has missing NOTIFY_TYPES member Dec 2 04:56:47 localhost ceph-mgr[287188]: mgr[py] Loading python module 'telegraf' Dec 2 04:56:47 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:47.930+0000 7fd411515140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Dec 2 04:56:47 localhost ceph-mgr[287188]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Dec 2 04:56:47 localhost ceph-mgr[287188]: mgr[py] Loading python module 'telemetry' Dec 2 04:56:48 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:48.058+0000 7fd411515140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Dec 2 04:56:48 localhost ceph-mgr[287188]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Dec 2 04:56:48 localhost ceph-mgr[287188]: mgr[py] Loading python module 'test_orchestrator' Dec 2 04:56:48 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:48.205+0000 7fd411515140 -1 mgr[py] Module 
test_orchestrator has missing NOTIFY_TYPES member Dec 2 04:56:48 localhost ceph-mgr[287188]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Dec 2 04:56:48 localhost ceph-mgr[287188]: mgr[py] Loading python module 'volumes' Dec 2 04:56:48 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:48.397+0000 7fd411515140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Dec 2 04:56:48 localhost ceph-mgr[287188]: mgr[py] Module volumes has missing NOTIFY_TYPES member Dec 2 04:56:48 localhost ceph-mgr[287188]: mgr[py] Loading python module 'zabbix' Dec 2 04:56:48 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:48.455+0000 7fd411515140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Dec 2 04:56:48 localhost ceph-mgr[287188]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Dec 2 04:56:48 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x561987797600 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0 Dec 2 04:56:48 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:49 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:56:50 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:50 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:52 localhost nova_compute[281045]: 2025-12-02 09:56:52.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:56:52 localhost nova_compute[281045]: 2025-12-02 09:56:52.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:56:52 localhost nova_compute[281045]: 2025-12-02 09:56:52.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:56:52 localhost nova_compute[281045]: 2025-12-02 09:56:52.639 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:56:52 localhost nova_compute[281045]: 2025-12-02 09:56:52.640 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:56:52 localhost nova_compute[281045]: 2025-12-02 09:56:52.640 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:56:52 localhost nova_compute[281045]: 2025-12-02 09:56:52.641 281049 DEBUG nova.compute.resource_tracker [None 
req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:56:52 localhost nova_compute[281045]: 2025-12-02 09:56:52.643 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:56:52 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:53 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:56:53 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2995218045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:56:53 localhost nova_compute[281045]: 2025-12-02 09:56:53.078 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:56:53 localhost nova_compute[281045]: 2025-12-02 09:56:53.240 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:56:53 localhost nova_compute[281045]: 2025-12-02 09:56:53.241 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12021MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:56:53 localhost nova_compute[281045]: 2025-12-02 09:56:53.242 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:56:53 localhost nova_compute[281045]: 2025-12-02 09:56:53.242 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:56:53 localhost nova_compute[281045]: 2025-12-02 09:56:53.767 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:56:53 localhost nova_compute[281045]: 2025-12-02 09:56:53.768 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:56:53 localhost nova_compute[281045]: 2025-12-02 09:56:53.782 281049 DEBUG 
oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:56:54 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:56:54 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/916855140' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:56:54 localhost nova_compute[281045]: 2025-12-02 09:56:54.200 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:56:54 localhost nova_compute[281045]: 2025-12-02 09:56:54.204 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:56:54 localhost systemd[1]: Stopping User Manager for UID 1002... Dec 2 04:56:54 localhost systemd[26272]: Activating special unit Exit the Session... Dec 2 04:56:54 localhost systemd[26272]: Removed slice User Background Tasks Slice. Dec 2 04:56:54 localhost systemd[26272]: Stopped target Main User Target. Dec 2 04:56:54 localhost systemd[26272]: Stopped target Basic System. Dec 2 04:56:54 localhost systemd[26272]: Stopped target Paths. Dec 2 04:56:54 localhost systemd[26272]: Stopped target Sockets. Dec 2 04:56:54 localhost systemd[26272]: Stopped target Timers. Dec 2 04:56:54 localhost systemd[26272]: Stopped Mark boot as successful after the user session has run 2 minutes. 
Dec 2 04:56:54 localhost systemd[26272]: Stopped Daily Cleanup of User's Temporary Directories. Dec 2 04:56:54 localhost systemd[26272]: Closed D-Bus User Message Bus Socket. Dec 2 04:56:54 localhost systemd[26272]: Stopped Create User's Volatile Files and Directories. Dec 2 04:56:54 localhost systemd[26272]: Removed slice User Application Slice. Dec 2 04:56:54 localhost systemd[26272]: Reached target Shutdown. Dec 2 04:56:54 localhost systemd[26272]: Finished Exit the Session. Dec 2 04:56:54 localhost systemd[26272]: Reached target Exit the Session. Dec 2 04:56:54 localhost systemd[1]: user@1002.service: Deactivated successfully. Dec 2 04:56:54 localhost systemd[1]: Stopped User Manager for UID 1002. Dec 2 04:56:54 localhost systemd[1]: user@1002.service: Consumed 13.598s CPU time, read 0B from disk, written 7.0K to disk. Dec 2 04:56:54 localhost systemd[1]: Stopping User Runtime Directory /run/user/1002... Dec 2 04:56:54 localhost systemd[1]: run-user-1002.mount: Deactivated successfully. 
Dec 2 04:56:54 localhost nova_compute[281045]: 2025-12-02 09:56:54.262 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:56:54 localhost nova_compute[281045]: 2025-12-02 09:56:54.266 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:56:54 localhost systemd[1]: user-runtime-dir@1002.service: Deactivated successfully. Dec 2 04:56:54 localhost nova_compute[281045]: 2025-12-02 09:56:54.266 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:56:54 localhost systemd[1]: Stopped User Runtime Directory /run/user/1002. Dec 2 04:56:54 localhost systemd[1]: Removed slice User Slice of UID 1002. Dec 2 04:56:54 localhost systemd[1]: user-1002.slice: Consumed 4min 22.823s CPU time. 
Dec 2 04:56:54 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:56:54 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:55 localhost nova_compute[281045]: 2025-12-02 09:56:55.267 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:56:55 localhost nova_compute[281045]: 2025-12-02 09:56:55.268 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:56:55 localhost nova_compute[281045]: 2025-12-02 09:56:55.268 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:56:55 localhost nova_compute[281045]: 2025-12-02 09:56:55.282 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:56:55 localhost nova_compute[281045]: 2025-12-02 09:56:55.283 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:56:55 localhost nova_compute[281045]: 2025-12-02 09:56:55.284 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:56:55 localhost nova_compute[281045]: 2025-12-02 09:56:55.284 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:56:55 localhost nova_compute[281045]: 2025-12-02 09:56:55.285 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:56:55 localhost nova_compute[281045]: 2025-12-02 09:56:55.285 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:56:55 localhost nova_compute[281045]: 2025-12-02 09:56:55.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:56:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:56:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:56:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:56:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:56:56 localhost systemd[1]: tmp-crun.SbiBXd.mount: Deactivated successfully. 
Dec 2 04:56:56 localhost podman[296914]: 2025-12-02 09:56:56.125731984 +0000 UTC m=+0.119624531 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:56:56 localhost podman[296915]: 2025-12-02 09:56:56.180518575 +0000 UTC m=+0.172327038 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 
'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Dec 2 04:56:56 localhost podman[296913]: 2025-12-02 09:56:56.09250547 +0000 UTC m=+0.091749091 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:56:56 localhost podman[296914]: 2025-12-02 09:56:56.210280883 +0000 UTC m=+0.204173470 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:56:56 localhost systemd[1]: 
8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 04:56:56 localhost podman[296913]: 2025-12-02 09:56:56.224776365 +0000 UTC m=+0.224019926 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 
04:56:56 localhost podman[296922]: 2025-12-02 09:56:56.223997712 +0000 UTC m=+0.214890858 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:56:56 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:56:56 localhost podman[296915]: 2025-12-02 09:56:56.241586098 +0000 UTC m=+0.233394571 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3) Dec 2 04:56:56 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 04:56:56 localhost podman[296922]: 2025-12-02 09:56:56.308065887 +0000 UTC m=+0.298959083 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:56:56 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:56:56 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:58 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:56:59 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:57:00 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:57:02 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:57:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:57:03.167 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:57:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:57:03.168 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:57:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:57:03.168 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:57:03 localhost podman[239757]: time="2025-12-02T09:57:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:57:03 localhost podman[239757]: @ - - [02/Dec/2025:09:57:03 
+0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 04:57:03 localhost podman[239757]: @ - - [02/Dec/2025:09:57:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19182 "" "Go-http-client/1.1" Dec 2 04:57:04 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 04:57:04 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1607212135' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 04:57:04 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 04:57:04 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1607212135' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0. 
Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.505817) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28 Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669424505855, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2767, "num_deletes": 253, "total_data_size": 7211164, "memory_usage": 7864776, "flush_reason": "Manual Compaction"} Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669424525858, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 4223005, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16611, "largest_seqno": 19373, "table_properties": {"data_size": 4211723, "index_size": 7019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3205, "raw_key_size": 28461, "raw_average_key_size": 22, "raw_value_size": 4187242, "raw_average_value_size": 3297, "num_data_blocks": 307, "num_entries": 1270, "num_filter_entries": 1270, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669340, "oldest_key_time": 1764669340, "file_creation_time": 1764669424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}} Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 20094 microseconds, and 6217 cpu microseconds. Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.525906) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 4223005 bytes OK Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.525930) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.527753) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.527769) EVENT_LOG_v1 {"time_micros": 1764669424527765, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.527789) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 7197842, prev total WAL file size 
7197842, number of live WAL files 2. Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.529053) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end) Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(4124KB)], [27(14MB)] Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669424529118, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 19208552, "oldest_snapshot_seqno": -1} Dec 2 04:57:04 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 10849 keys, 16102954 bytes, temperature: kUnknown Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669424632040, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 16102954, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16038686, "index_size": 36071, "index_partitions": 0, 
"top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27141, "raw_key_size": 290928, "raw_average_key_size": 26, "raw_value_size": 15850808, "raw_average_value_size": 1461, "num_data_blocks": 1382, "num_entries": 10849, "num_filter_entries": 10849, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 1764669424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}} Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.634166) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 16102954 bytes
Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.636412) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.5 rd, 156.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 14.3 +0.0 blob) out(15.4 +0.0 blob), read-write-amplify(8.4) write-amplify(3.8) OK, records in: 11393, records dropped: 544 output_compression: NoCompression
Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.636487) EVENT_LOG_v1 {"time_micros": 1764669424636443, "job": 14, "event": "compaction_finished", "compaction_time_micros": 103008, "compaction_time_cpu_micros": 49161, "output_level": 6, "num_output_files": 1, "total_output_size": 16102954, "num_input_records": 11393, "num_output_records": 10849, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669424637475, "job": 14, "event": "table_file_deletion", "file_number": 29}
Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669424639984, "job": 14, "event": "table_file_deletion", "file_number": 27}
Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.528943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.640111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.640118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.640122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.640125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:57:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.640127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:57:04 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:57:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 04:57:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:57:06 localhost podman[296999]: 2025-12-02 09:57:06.084302927 +0000 UTC m=+0.081029313 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 2 04:57:06 localhost podman[296999]: 2025-12-02 09:57:06.091271609 +0000 UTC m=+0.087997875 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 2 04:57:06 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 04:57:06 localhost podman[297000]: 2025-12-02 09:57:06.139721857 +0000 UTC m=+0.133089041 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-type=git, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 2 04:57:06 localhost podman[297000]: 2025-12-02 09:57:06.149795355 +0000 UTC m=+0.143162609 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, version=9.6, maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, container_name=openstack_network_exporter)
Dec 2 04:57:06 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 04:57:06 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:57:08 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:57:09 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:57:10 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:57:10 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:57:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 04:57:11 localhost podman[297042]: 2025-12-02 09:57:11.116492041 +0000 UTC m=+0.122654693 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 2 04:57:11 localhost podman[297042]: 2025-12-02 09:57:11.129003572 +0000 UTC m=+0.135166214 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:57:11 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 04:57:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 04:57:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5039 writes, 22K keys, 5039 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5039 writes, 750 syncs, 6.72 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 193 writes, 569 keys, 193 commit groups, 1.0 writes per commit group, ingest: 0.60 MB, 0.00 MB/s#012Interval WAL: 193 writes, 73 syncs, 2.64 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 2 04:57:12 localhost openstack_network_exporter[241816]: ERROR 09:57:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:57:12 localhost openstack_network_exporter[241816]: ERROR 09:57:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:57:12 localhost openstack_network_exporter[241816]: ERROR 09:57:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:57:12 localhost openstack_network_exporter[241816]: ERROR 09:57:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:57:12 localhost openstack_network_exporter[241816]:
Dec 2 04:57:12 localhost openstack_network_exporter[241816]: ERROR 09:57:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:57:12 localhost openstack_network_exporter[241816]:
Dec 2 04:57:12 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:57:14 localhost sshd[297060]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:57:14 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e89 e89: 6 total, 6 up, 6 in
Dec 2 04:57:14 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:57:14 localhost ceph-mon[288526]: Activating manager daemon np0005541913.mfesdm
Dec 2 04:57:14 localhost ceph-mon[288526]: Manager daemon np0005541910.kzipdo is unresponsive, replacing it with standby daemon np0005541913.mfesdm
Dec 2 04:57:14 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:57:15 localhost sshd[297063]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:57:15 localhost systemd-logind[760]: New session 66 of user ceph-admin.
Dec 2 04:57:15 localhost systemd[1]: Created slice User Slice of UID 1002.
Dec 2 04:57:15 localhost systemd[1]: Starting User Runtime Directory /run/user/1002...
Dec 2 04:57:15 localhost systemd[1]: Finished User Runtime Directory /run/user/1002.
Dec 2 04:57:15 localhost systemd[1]: Starting User Manager for UID 1002...
Dec 2 04:57:15 localhost systemd[297067]: Queued start job for default target Main User Target.
Dec 2 04:57:15 localhost systemd[297067]: Created slice User Application Slice.
Dec 2 04:57:15 localhost systemd[297067]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 2 04:57:15 localhost systemd[297067]: Started Daily Cleanup of User's Temporary Directories.
Dec 2 04:57:15 localhost systemd[297067]: Reached target Paths.
Dec 2 04:57:15 localhost systemd[297067]: Reached target Timers.
Dec 2 04:57:15 localhost systemd[297067]: Starting D-Bus User Message Bus Socket...
Dec 2 04:57:15 localhost systemd[297067]: Starting Create User's Volatile Files and Directories...
Dec 2 04:57:15 localhost systemd[297067]: Listening on D-Bus User Message Bus Socket.
Dec 2 04:57:15 localhost systemd[297067]: Reached target Sockets.
Dec 2 04:57:15 localhost systemd[297067]: Finished Create User's Volatile Files and Directories.
Dec 2 04:57:15 localhost systemd[297067]: Reached target Basic System.
Dec 2 04:57:15 localhost systemd[297067]: Reached target Main User Target.
Dec 2 04:57:15 localhost systemd[297067]: Startup finished in 145ms.
Dec 2 04:57:15 localhost systemd[1]: Started User Manager for UID 1002.
Dec 2 04:57:15 localhost systemd[1]: Started Session 66 of User ceph-admin.
Dec 2 04:57:15 localhost ceph-mon[288526]: Manager daemon np0005541913.mfesdm is now available
Dec 2 04:57:15 localhost ceph-mon[288526]: removing stray HostCache host record np0005541910.localdomain.devices.0
Dec 2 04:57:15 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain.devices.0"} : dispatch
Dec 2 04:57:15 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain.devices.0"}]': finished
Dec 2 04:57:15 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain.devices.0"} : dispatch
Dec 2 04:57:15 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain.devices.0"}]': finished
Dec 2 04:57:15 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541913.mfesdm/mirror_snapshot_schedule"} : dispatch
Dec 2 04:57:15 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541913.mfesdm/trash_purge_schedule"} : dispatch
Dec 2 04:57:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 04:57:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.2 total, 600.0 interval#012Cumulative writes: 5878 writes, 25K keys, 5878 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5878 writes, 789 syncs, 7.45 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 111 writes, 312 keys, 111 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s#012Interval WAL: 111 writes, 43 syncs, 2.58 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 2 04:57:16 localhost podman[297191]: 2025-12-02 09:57:16.650168834 +0000 UTC m=+0.103122327 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=7, name=rhceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph)
Dec 2 04:57:16 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:57:16 localhost podman[297191]: 2025-12-02 09:57:16.736675093 +0000 UTC m=+0.189628566 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-type=git, com.redhat.component=rhceph-container, ceph=True, io.openshift.tags=rhceph ceph, version=7, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph)
Dec 2 04:57:16 localhost ceph-mon[288526]: [02/Dec/2025:09:57:16] ENGINE Bus STARTING
Dec 2 04:57:16 localhost ceph-mon[288526]: [02/Dec/2025:09:57:16] ENGINE Serving on http://172.18.0.107:8765
Dec 2 04:57:16 localhost ceph-mon[288526]: [02/Dec/2025:09:57:16] ENGINE Serving on https://172.18.0.107:7150
Dec 2 04:57:16 localhost ceph-mon[288526]: [02/Dec/2025:09:57:16] ENGINE Bus STARTED
Dec 2 04:57:16 localhost ceph-mon[288526]: [02/Dec/2025:09:57:16] ENGINE Client ('172.18.0.107', 38106) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 2 04:57:16 localhost ceph-mon[288526]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Dec 2 04:57:16 localhost ceph-mon[288526]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Dec 2 04:57:16 localhost ceph-mon[288526]: Cluster is now healthy
Dec 2 04:57:17 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:17 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:17 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:17 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:17 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:17 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:17 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:17 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:18 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:57:18 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:57:19 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:57:19 localhost ceph-mon[288526]: Saving service mon spec with placement label:mon
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:19 localhost ceph-mon[288526]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 2 04:57:19 localhost ceph-mon[288526]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 2 04:57:19 localhost ceph-mon[288526]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 2 04:57:19 localhost ceph-mon[288526]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 2 04:57:19 localhost ceph-mon[288526]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 2 04:57:19 localhost ceph-mon[288526]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:57:19 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:57:19 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 2 04:57:19 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 2 04:57:19 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 2 04:57:19 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 2 04:57:20 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:57:20 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:57:20 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:57:20 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:57:20 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:57:20 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:57:20 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:57:20 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:57:20 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:57:20 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:57:21 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 2 04:57:21 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/2604409311' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 2 04:57:21 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:57:21 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:57:21 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:57:21 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:57:21 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:21 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:21 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:21 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:21 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:21 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:21 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:22 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:22 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:22 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:22 localhost ceph-mon[288526]: Reconfiguring
mon.np0005541911 (monmap changed)... Dec 2 04:57:22 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:57:22 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain Dec 2 04:57:22 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:57:22 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:57:23 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:23 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:23 localhost ceph-mon[288526]: Reconfiguring mon.np0005541912 (monmap changed)... Dec 2 04:57:23 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:57:23 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain Dec 2 04:57:24 localhost podman[298143]: Dec 2 04:57:24 localhost podman[298143]: 2025-12-02 09:57:24.549546087 +0000 UTC m=+0.072138343 container create a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_lichterman, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, version=7, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, release=1763362218, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7) Dec 2 04:57:24 localhost systemd[1]: Started libpod-conmon-a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e.scope. Dec 2 04:57:24 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:57:24 localhost systemd[1]: Started libcrun container. 
Dec 2 04:57:24 localhost podman[298143]: 2025-12-02 09:57:24.526022809 +0000 UTC m=+0.048615065 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:57:24 localhost podman[298143]: 2025-12-02 09:57:24.6302799 +0000 UTC m=+0.152872156 container init a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_lichterman, architecture=x86_64, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1763362218)
Dec 2 04:57:24 localhost podman[298143]: 2025-12-02 09:57:24.642477981 +0000 UTC m=+0.165070247 container start a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_lichterman, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, ceph=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, distribution-scope=public, GIT_CLEAN=True)
Dec 2 04:57:24 localhost podman[298143]: 2025-12-02 09:57:24.642821792 +0000 UTC m=+0.165414098 container attach a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_lichterman, name=rhceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=7, ceph=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.)
Dec 2 04:57:24 localhost affectionate_lichterman[298158]: 167 167
Dec 2 04:57:24 localhost systemd[1]: libpod-a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e.scope: Deactivated successfully.
Dec 2 04:57:24 localhost podman[298143]: 2025-12-02 09:57:24.647281928 +0000 UTC m=+0.169874194 container died a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_lichterman, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, release=1763362218, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, ceph=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7)
Dec 2 04:57:24 localhost podman[298163]: 2025-12-02 09:57:24.748947559 +0000 UTC m=+0.090125880 container remove a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_lichterman, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, io.buildah.version=1.41.4, RELEASE=main, ceph=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux )
Dec 2 04:57:24 localhost systemd[1]: libpod-conmon-a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e.scope: Deactivated successfully.
Dec 2 04:57:24 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:24 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:24 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 2 04:57:24 localhost ceph-mon[288526]: Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 2 04:57:24 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 2 04:57:25 localhost systemd[1]: var-lib-containers-storage-overlay-50568e2eff231372a4e20c095bbecd6298cd2aa0c01816189354fb3673909ed1-merged.mount: Deactivated successfully.
Dec 2 04:57:25 localhost podman[298239]:
Dec 2 04:57:25 localhost podman[298239]: 2025-12-02 09:57:25.688661487 +0000 UTC m=+0.078081333 container create 96508b77995af9523ed429833af313013f302295f32869205c287cf452684020 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_wilson, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, ceph=True, version=7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.41.4, GIT_BRANCH=main)
Dec 2 04:57:25 localhost systemd[1]: Started libpod-conmon-96508b77995af9523ed429833af313013f302295f32869205c287cf452684020.scope.
Dec 2 04:57:25 localhost systemd[1]: Started libcrun container.
Dec 2 04:57:25 localhost podman[298239]: 2025-12-02 09:57:25.657498497 +0000 UTC m=+0.046918373 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:57:25 localhost podman[298239]: 2025-12-02 09:57:25.760492688 +0000 UTC m=+0.149912534 container init 96508b77995af9523ed429833af313013f302295f32869205c287cf452684020 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_wilson, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, name=rhceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, release=1763362218, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 04:57:25 localhost podman[298239]: 2025-12-02 09:57:25.771637998 +0000 UTC m=+0.161057844 container start 96508b77995af9523ed429833af313013f302295f32869205c287cf452684020 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_wilson, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , version=7, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.41.4, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True)
Dec 2 04:57:25 localhost podman[298239]: 2025-12-02 09:57:25.771948437 +0000 UTC m=+0.161368313 container attach 96508b77995af9523ed429833af313013f302295f32869205c287cf452684020 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_wilson, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, release=1763362218, version=7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4)
Dec 2 04:57:25 localhost loving_wilson[298254]: 167 167
Dec 2 04:57:25 localhost systemd[1]: libpod-96508b77995af9523ed429833af313013f302295f32869205c287cf452684020.scope: Deactivated successfully.
Dec 2 04:57:25 localhost podman[298239]: 2025-12-02 09:57:25.776799836 +0000 UTC m=+0.166219712 container died 96508b77995af9523ed429833af313013f302295f32869205c287cf452684020 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_wilson, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, ceph=True, name=rhceph, GIT_BRANCH=main, vcs-type=git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=1763362218, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 04:57:25 localhost podman[298260]: 2025-12-02 09:57:25.893394733 +0000 UTC m=+0.103698914 container remove 96508b77995af9523ed429833af313013f302295f32869205c287cf452684020 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_wilson, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, version=7, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, CEPH_POINT_RELEASE=, name=rhceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Guillaume Abrioux )
Dec 2 04:57:25 localhost systemd[1]: libpod-conmon-96508b77995af9523ed429833af313013f302295f32869205c287cf452684020.scope: Deactivated successfully.
Dec 2 04:57:25 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:25 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:25 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:25 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:25 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 2 04:57:25 localhost ceph-mon[288526]: Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 2 04:57:25 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:57:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:57:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:57:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:57:26 localhost podman[298303]: 2025-12-02 09:57:26.428639411 +0000 UTC m=+0.156153245 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 2 04:57:26 localhost podman[298301]: 2025-12-02 09:57:26.442311079 +0000 UTC m=+0.174063851 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 2 04:57:26 localhost podman[298301]: 2025-12-02 09:57:26.482994359 +0000 UTC m=+0.214747121 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 2 04:57:26 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 2 04:57:26 localhost podman[298303]: 2025-12-02 09:57:26.515180812 +0000 UTC m=+0.242694866 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:57:26 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:57:26 localhost podman[298304]: 2025-12-02 09:57:26.538656977 +0000 UTC m=+0.262400666 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:57:26 localhost systemd[1]: var-lib-containers-storage-overlay-491395cbb31bfe2335f113b7668e4179b94ab9be4c90a5d241bc3ab8198f285a-merged.mount: Deactivated successfully.
Dec 2 04:57:26 localhost podman[298354]: 2025-12-02 09:57:26.505570958 +0000 UTC m=+0.091349597 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:57:26 localhost podman[298304]: 2025-12-02 09:57:26.579949047 +0000 UTC m=+0.303692706 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 2 04:57:26 localhost podman[298354]: 2025-12-02 09:57:26.591878211 +0000 UTC m=+0.177656800 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd,
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Dec 2 04:57:26 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 04:57:26 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:57:26 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:57:26 localhost podman[298418]: Dec 2 04:57:26 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:57:26 localhost podman[298418]: 2025-12-02 09:57:26.821323941 +0000 UTC m=+0.084838329 container create 5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_easley, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, maintainer=Guillaume Abrioux , name=rhceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1763362218) Dec 2 04:57:26 localhost systemd[1]: Started libpod-conmon-5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0.scope. Dec 2 04:57:26 localhost systemd[1]: Started libcrun container. 
Dec 2 04:57:26 localhost podman[298418]: 2025-12-02 09:57:26.784857628 +0000 UTC m=+0.048372006 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:57:26 localhost podman[298418]: 2025-12-02 09:57:26.895848074 +0000 UTC m=+0.159362462 container init 5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_easley, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, distribution-scope=public, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z) Dec 2 04:57:26 localhost podman[298418]: 2025-12-02 09:57:26.912558604 +0000 UTC m=+0.176072942 container start 5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_easley, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_CLEAN=True, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, version=7, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=) Dec 2 04:57:26 localhost podman[298418]: 2025-12-02 09:57:26.912782721 +0000 UTC m=+0.176297099 container attach 5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_easley, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, release=1763362218, CEPH_POINT_RELEASE=, 
io.openshift.tags=rhceph ceph, distribution-scope=public, ceph=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main) Dec 2 04:57:26 localhost dazzling_easley[298433]: 167 167 Dec 2 04:57:26 localhost systemd[1]: libpod-5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0.scope: Deactivated successfully. Dec 2 04:57:26 localhost podman[298418]: 2025-12-02 09:57:26.918098373 +0000 UTC m=+0.181612721 container died 5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_easley, GIT_CLEAN=True, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vendor=Red Hat, Inc., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:57:27 localhost podman[298438]: 2025-12-02 09:57:27.004351824 +0000 UTC m=+0.078834276 container remove 5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_easley, 
CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , release=1763362218, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:57:27 localhost systemd[1]: libpod-conmon-5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0.scope: Deactivated successfully. Dec 2 04:57:27 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:27 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:27 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:27 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:27 localhost ceph-mon[288526]: Reconfiguring mon.np0005541914 (monmap changed)... 
Dec 2 04:57:27 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:57:27 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain Dec 2 04:57:27 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:27 localhost systemd[1]: tmp-crun.dOENfJ.mount: Deactivated successfully. Dec 2 04:57:27 localhost systemd[1]: var-lib-containers-storage-overlay-5f7a1d9a97390e8108c64a4f2bad5f2486a107173b555df3b4a4370d313b1e2a-merged.mount: Deactivated successfully. Dec 2 04:57:28 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:28 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:57:28 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:28 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:57:29 localhost ceph-mon[288526]: mon.np0005541914@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:57:29 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x561987797600 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0 Dec 2 04:57:29 localhost ceph-mon[288526]: mon.np0005541914@1(peon) e12 my rank is now 0 (was 1) Dec 2 04:57:29 localhost ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Dec 2 04:57:29 localhost ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Dec 2 04:57:29 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x561987797080 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0 
Dec 2 04:57:29 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election Dec 2 04:57:29 localhost ceph-mon[288526]: paxos.0).electionLogic(46) init, last seen epoch 46 Dec 2 04:57:29 localhost ceph-mon[288526]: mon.np0005541914@0(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:29 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1) Dec 2 04:57:29 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : monmap epoch 12 Dec 2 04:57:29 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 Dec 2 04:57:29 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : last_changed 2025-12-02T09:57:29.744140+0000 Dec 2 04:57:29 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : created 2025-12-02T07:44:17.411659+0000 Dec 2 04:57:29 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Dec 2 04:57:29 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : election_strategy: 1 Dec 2 04:57:29 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914 Dec 2 04:57:29 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912 Dec 2 04:57:29 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:29 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby Dec 2 04:57:29 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in Dec 2 04:57:29 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : mgrmap e31: 
np0005541913.mfesdm(active, since 15s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk Dec 2 04:57:30 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : overall HEALTH_OK Dec 2 04:57:30 localhost ceph-mon[288526]: Remove daemons mon.np0005541911 Dec 2 04:57:30 localhost ceph-mon[288526]: Safe to remove mon.np0005541911: new quorum should be ['np0005541914', 'np0005541912'] (from ['np0005541914', 'np0005541912']) Dec 2 04:57:30 localhost ceph-mon[288526]: Removing monitor np0005541911 from monmap... Dec 2 04:57:30 localhost ceph-mon[288526]: Removing daemon mon.np0005541911 from np0005541911.localdomain -- ports [] Dec 2 04:57:30 localhost ceph-mon[288526]: mon.np0005541912 calling monitor election Dec 2 04:57:30 localhost ceph-mon[288526]: mon.np0005541914 calling monitor election Dec 2 04:57:30 localhost ceph-mon[288526]: mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1) Dec 2 04:57:30 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:57:30 localhost ceph-mon[288526]: overall HEALTH_OK Dec 2 04:57:30 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e12 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 04:57:30 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:30 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e12 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:57:30 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e12 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:57:30 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e12 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 2 04:57:30 localhost ceph-mon[288526]: 
mon.np0005541914@0(leader).monmap v12 adding/updating np0005541913 at [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to monitor cluster Dec 2 04:57:30 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x561987796f20 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0 Dec 2 04:57:30 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election Dec 2 04:57:30 localhost ceph-mon[288526]: paxos.0).electionLogic(48) init, last seen epoch 48 Dec 2 04:57:30 localhost ceph-mon[288526]: mon.np0005541914@0(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:33 localhost podman[239757]: time="2025-12-02T09:57:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:57:33 localhost podman[239757]: @ - - [02/Dec/2025:09:57:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 04:57:33 localhost podman[239757]: @ - - [02/Dec/2025:09:57:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19179 "" "Go-http-client/1.1" Dec 2 04:57:35 localhost ceph-mds[285895]: mds.beacon.mds.np0005541914.sqgqkj missed beacon ack from the monitors Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1) Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : monmap epoch 13 Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : last_changed 2025-12-02T09:57:30.836166+0000 Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : created 2025-12-02T07:44:17.411659+0000 Dec 2 04:57:35 localhost ceph-mon[288526]: 
log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : election_strategy: 1 Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914 Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912 Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913 Dec 2 04:57:35 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : mgrmap e31: np0005541913.mfesdm(active, since 21s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum np0005541914,np0005541912 (MON_DOWN) Dec 2 04:57:35 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:57:35 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:57:35 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:57:35 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 
handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0) Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum np0005541914,np0005541912 Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum np0005541914,np0005541912 Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(cluster) log [WRN] : mon.np0005541913 (rank 2) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum) Dec 2 04:57:35 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:36 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:36 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:57:36 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:36 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:57:36 localhost ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:36 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:36 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:36 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:36 localhost ceph-mon[288526]: mon.np0005541912 calling monitor election Dec 2 04:57:36 
localhost ceph-mon[288526]: mon.np0005541914 calling monitor election Dec 2 04:57:36 localhost ceph-mon[288526]: mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1) Dec 2 04:57:36 localhost ceph-mon[288526]: Health check failed: 1/3 mons down, quorum np0005541914,np0005541912 (MON_DOWN) Dec 2 04:57:36 localhost ceph-mon[288526]: Health detail: HEALTH_WARN 1/3 mons down, quorum np0005541914,np0005541912 Dec 2 04:57:36 localhost ceph-mon[288526]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005541914,np0005541912 Dec 2 04:57:36 localhost ceph-mon[288526]: mon.np0005541913 (rank 2) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum) Dec 2 04:57:36 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:36 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:36 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:57:36 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:36 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0) Dec 2 04:57:36 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:36 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:36 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:36 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 04:57:36 localhost ceph-mon[288526]: log_channel(audit) 
log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:36 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 2 04:57:36 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 04:57:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:57:37 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:37 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:37 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:37 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:37 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:37 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:37 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:37 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:37 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:57:37 localhost ceph-mon[288526]: Deploying daemon mon.np0005541911 on np0005541911.localdomain
Dec 2 04:57:37 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:37 localhost ceph-mon[288526]: Removed label mon from host np0005541911.localdomain
Dec 2 04:57:37 localhost podman[298791]: 2025-12-02 09:57:37.094614624 +0000 UTC m=+0.096811574 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 2 04:57:37 localhost podman[298791]: 2025-12-02 09:57:37.109840909 +0000 UTC m=+0.112037889 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 2 04:57:37 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 04:57:37 localhost systemd[1]: tmp-crun.WRieZV.mount: Deactivated successfully.
Dec 2 04:57:37 localhost podman[298792]: 2025-12-02 09:57:37.192143439 +0000 UTC m=+0.192988108 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.6, architecture=x86_64, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 2 04:57:37 localhost podman[298792]: 2025-12-02 09:57:37.209090446 +0000 UTC m=+0.209935145 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 2 04:57:37 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 04:57:37 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 2 04:57:37 localhost ceph-mon[288526]: paxos.0).electionLogic(50) init, last seen epoch 50
Dec 2 04:57:37 localhost ceph-mon[288526]: mon.np0005541914@0(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913 in quorum (ranks 0,1,2)
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : monmap epoch 13
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : last_changed 2025-12-02T09:57:30.836166+0000
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : created 2025-12-02T07:44:17.411659+0000
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 2 04:57:37 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : mgrmap e31: np0005541913.mfesdm(active, since 23s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005541914,np0005541912)
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 2 04:57:37 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 2 04:57:38 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 2 04:57:38 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:38 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 2 04:57:38 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:38 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Dec 2 04:57:38 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Dec 2 04:57:38 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 2 04:57:38 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:38 localhost ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 2 04:57:38 localhost ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 2 04:57:38 localhost ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 2 04:57:38 localhost ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 2 04:57:38 localhost ceph-mon[288526]: mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913 in quorum (ranks 0,1,2)
Dec 2 04:57:38 localhost ceph-mon[288526]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005541914,np0005541912)
Dec 2 04:57:38 localhost ceph-mon[288526]: Cluster is now healthy
Dec 2 04:57:38 localhost ceph-mon[288526]: overall HEALTH_OK
Dec 2 04:57:38 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:38 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:38 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:38 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 2 04:57:38 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:38 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 2 04:57:38 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:38 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e13 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Dec 2 04:57:38 localhost ceph-mon[288526]: mon.np0005541914@0(leader).monmap v13 adding/updating np0005541911 at [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to monitor cluster
Dec 2 04:57:39 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x5619877971e0 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Dec 2 04:57:39 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 2 04:57:39 localhost ceph-mon[288526]: paxos.0).electionLogic(52) init, last seen epoch 52
Dec 2 04:57:39 localhost ceph-mon[288526]: mon.np0005541914@0(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:57:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 04:57:42 localhost podman[298853]: 2025-12-02 09:57:42.070858183 +0000 UTC m=+0.071545544 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:57:42 localhost podman[298853]: 2025-12-02 09:57:42.082670143 +0000 UTC m=+0.083357454 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 2 04:57:42 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 04:57:42 localhost openstack_network_exporter[241816]: ERROR 09:57:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:57:42 localhost openstack_network_exporter[241816]: ERROR 09:57:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:57:42 localhost openstack_network_exporter[241816]: ERROR 09:57:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:57:42 localhost openstack_network_exporter[241816]: ERROR 09:57:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:57:42 localhost openstack_network_exporter[241816]:
Dec 2 04:57:42 localhost openstack_network_exporter[241816]: ERROR 09:57:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:57:42 localhost openstack_network_exporter[241816]:
Dec 2 04:57:43 localhost ceph-mds[285895]: mds.beacon.mds.np0005541914.sqgqkj missed beacon ack from the monitors
Dec 2 04:57:44 localhost ceph-mon[288526]: paxos.0).electionLogic(53) init, last seen epoch 53, mid-election, bumping
Dec 2 04:57:44 localhost ceph-mon[288526]: mon.np0005541914@0(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913,np0005541911 in quorum (ranks 0,1,2,3)
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : monmap epoch 14
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : last_changed 2025-12-02T09:57:38.994501+0000
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : created 2025-12-02T07:44:17.411659+0000
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : 3: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541911
Dec 2 04:57:44 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : mgrmap e31: np0005541913.mfesdm(active, since 29s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 2 04:57:44 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 2 04:57:44 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:44 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:44 localhost ceph-mon[288526]: Removed label _admin from host np0005541911.localdomain
Dec 2 04:57:44 localhost ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 2 04:57:44 localhost ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 2 04:57:44 localhost ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 2 04:57:44 localhost ceph-mon[288526]: mon.np0005541911 calling monitor election
Dec 2 04:57:44 localhost ceph-mon[288526]: mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913,np0005541911 in quorum (ranks 0,1,2,3)
Dec 2 04:57:44 localhost ceph-mon[288526]: overall HEALTH_OK
Dec 2 04:57:44 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:44 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:44 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:44 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 2 04:57:44 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:44 localhost ceph-mon[288526]: mon.np0005541914@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:57:45 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:45 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:57:45 localhost ceph-mon[288526]: Removing np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:57:45 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 2 04:57:45 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 2 04:57:45 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 2 04:57:45 localhost ceph-mon[288526]: Removing np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:57:45 localhost ceph-mon[288526]: Removing np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:57:45 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:45 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:45 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:57:45 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:45 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:57:45 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:57:45 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:45 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:57:45 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:45 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:45 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:57:45 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:45 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:57:45 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:45 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 2 04:57:45 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:46 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:57:46 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:57:46 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:57:46 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:46 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:46 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:46 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:46 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:46 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:46 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:47 localhost ceph-mon[288526]: Removing daemon mgr.np0005541911.adcgiw from np0005541911.localdomain -- ports [8765]
Dec 2 04:57:47 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.np0005541911.adcgiw"} v 0)
Dec 2 04:57:47 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth rm", "entity": "mgr.np0005541911.adcgiw"} : dispatch
Dec 2 04:57:47 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005541911.adcgiw"}]': finished
Dec 2 04:57:47 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 2 04:57:47 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:47 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 2 04:57:47 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:47 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command({"prefix": "mon rm", "name": "np0005541911"} v 0)
Dec 2 04:57:47 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon rm", "name": "np0005541911"} : dispatch
Dec 2 04:57:47 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x561987797600 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Dec 2 04:57:47 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 2 04:57:47 localhost ceph-mon[288526]: paxos.0).electionLogic(56) init, last seen epoch 56
Dec 2 04:57:47 localhost ceph-mon[288526]: mon.np0005541914@0(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:57:49 localhost ceph-mon[288526]: mon.np0005541914@0(electing) e15 handle_auth_request failed to assign global_id
Dec 2 04:57:50 localhost ceph-mon[288526]: mon.np0005541914@0(electing) e15 handle_auth_request failed to assign global_id
Dec 2 04:57:50 localhost ceph-mon[288526]: mon.np0005541914@0(electing) e15 handle_auth_request failed to assign global_id
Dec 2 04:57:51 localhost ceph-mon[288526]: mon.np0005541914@0(electing) e15 handle_auth_request failed to assign global_id
Dec 2 04:57:51 localhost nova_compute[281045]: 2025-12-02 09:57:51.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:57:51 localhost nova_compute[281045]: 2025-12-02 09:57:51.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 2 04:57:51 localhost nova_compute[281045]: 2025-12-02 09:57:51.664 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 2 04:57:52 localhost nova_compute[281045]: 2025-12-02 09:57:52.659 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1)
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 2 04:57:52 localhost ceph-mon[288526]: paxos.0).electionLogic(59) init, last seen epoch 59, mid-election, bumping
Dec 2 04:57:52 localhost ceph-mon[288526]: mon.np0005541914@0(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913 in quorum (ranks 0,1,2)
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : monmap epoch 15
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : last_changed 2025-12-02T09:57:47.906570+0000
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : created 2025-12-02T07:44:17.411659+0000
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 2 04:57:52 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [DBG] : mgrmap e31: np0005541913.mfesdm(active, since 38s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 2 04:57:52 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 2 04:57:52 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:52 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:52 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 2 04:57:53 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:53 localhost ceph-mon[288526]: Removing key for mgr.np0005541911.adcgiw
Dec 2 04:57:53 localhost ceph-mon[288526]: Safe to remove mon.np0005541911: new quorum should be ['np0005541914', 'np0005541912', 'np0005541913'] (from ['np0005541914', 'np0005541912', 'np0005541913'])
Dec 2 04:57:53 localhost ceph-mon[288526]: Removing monitor np0005541911 from monmap...
Dec 2 04:57:53 localhost ceph-mon[288526]: Removing daemon mon.np0005541911 from np0005541911.localdomain -- ports []
Dec 2 04:57:53 localhost ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 2 04:57:53 localhost ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 2 04:57:53 localhost ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 2 04:57:53 localhost ceph-mon[288526]: mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1)
Dec 2 04:57:53 localhost ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 2 04:57:53 localhost ceph-mon[288526]: overall HEALTH_OK
Dec 2 04:57:53 localhost ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 2 04:57:53 localhost ceph-mon[288526]: mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913 in quorum (ranks 0,1,2)
Dec 2 04:57:53 localhost ceph-mon[288526]: overall HEALTH_OK
Dec 2 04:57:53 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:53 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:53 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 2 04:57:53 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:53 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 2 04:57:53 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:53
localhost nova_compute[281045]: 2025-12-02 09:57:53.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:53 localhost nova_compute[281045]: 2025-12-02 09:57:53.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:53 localhost nova_compute[281045]: 2025-12-02 09:57:53.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 2 04:57:54 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:54 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:54 localhost ceph-mon[288526]: Added label _no_schedule to host np0005541911.localdomain Dec 2 04:57:54 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:54 localhost ceph-mon[288526]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541911.localdomain Dec 2 04:57:54 localhost nova_compute[281045]: 2025-12-02 09:57:54.541 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:54 localhost nova_compute[281045]: 2025-12-02 09:57:54.542 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info 
cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:57:54 localhost nova_compute[281045]: 2025-12-02 09:57:54.542 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:57:54 localhost nova_compute[281045]: 2025-12-02 09:57:54.568 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:57:54 localhost nova_compute[281045]: 2025-12-02 09:57:54.568 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:54 localhost nova_compute[281045]: 2025-12-02 09:57:54.569 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:54 localhost nova_compute[281045]: 2025-12-02 09:57:54.569 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:54 localhost nova_compute[281045]: 2025-12-02 09:57:54.584 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:57:54 localhost nova_compute[281045]: 2025-12-02 09:57:54.584 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:57:54 localhost nova_compute[281045]: 2025-12-02 09:57:54.585 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:57:54 localhost nova_compute[281045]: 2025-12-02 09:57:54.585 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:57:54 localhost nova_compute[281045]: 2025-12-02 09:57:54.585 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:57:54 localhost ceph-mon[288526]: mon.np0005541914@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:57:54 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0) Dec 2 04:57:54 localhost ceph-mon[288526]: 
log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:54 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0) Dec 2 04:57:54 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:54 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:57:54 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3063913601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:57:55 localhost nova_compute[281045]: 2025-12-02 09:57:55.014 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:57:55 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:55 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:55 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:57:55 localhost nova_compute[281045]: 2025-12-02 09:57:55.185 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:57:55 localhost nova_compute[281045]: 2025-12-02 09:57:55.187 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11984MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:57:55 localhost nova_compute[281045]: 2025-12-02 09:57:55.187 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:57:55 localhost nova_compute[281045]: 2025-12-02 09:57:55.188 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:57:55 localhost nova_compute[281045]: 2025-12-02 09:57:55.282 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:57:55 localhost nova_compute[281045]: 2025-12-02 09:57:55.283 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:57:55 localhost nova_compute[281045]: 2025-12-02 09:57:55.331 281049 DEBUG 
nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 04:57:55 localhost nova_compute[281045]: 2025-12-02 09:57:55.380 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 04:57:55 localhost nova_compute[281045]: 2025-12-02 09:57:55.381 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 04:57:55 localhost nova_compute[281045]: 2025-12-02 09:57:55.396 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, 
aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 04:57:55 localhost nova_compute[281045]: 2025-12-02 09:57:55.419 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 04:57:55 localhost nova_compute[281045]: 2025-12-02 09:57:55.440 281049 DEBUG 
oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:57:55 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:57:55 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:55 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:57:55 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:55 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:57:55 localhost ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3093024393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:57:55 localhost nova_compute[281045]: 2025-12-02 09:57:55.899 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:57:55 localhost nova_compute[281045]: 2025-12-02 09:57:55.903 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:57:55 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Dec 2 04:57:55 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:55 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"} v 0) Dec 2 04:57:55 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"} : dispatch Dec 2 04:57:55 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"}]': finished Dec 2 04:57:56 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:57:56 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' 
entity='mgr.np0005541913.mfesdm' Dec 2 04:57:56 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:57:56 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:56 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:56 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:56 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:56 localhost ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:56 localhost ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:56 localhost ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:56 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:56 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:56 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"} : dispatch Dec 2 04:57:56 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:56 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"} : dispatch Dec 2 04:57:56 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"}]': finished Dec 2 04:57:56 localhost ceph-mon[288526]: from='mgr.26470 ' 
entity='mgr.np0005541913.mfesdm' Dec 2 04:57:56 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:56 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:57:56 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:56 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:57:56 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:56 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 04:57:56 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:56 localhost nova_compute[281045]: 2025-12-02 09:57:56.352 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:57:56 localhost nova_compute[281045]: 2025-12-02 09:57:56.355 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for 
np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:57:56 localhost nova_compute[281045]: 2025-12-02 09:57:56.355 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:57:56 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 2 04:57:56 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:57:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:57:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:57:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:57:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. 
Dec 2 04:57:57 localhost podman[299591]: 2025-12-02 09:57:57.092565942 +0000 UTC m=+0.091938186 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 04:57:57 localhost ceph-mon[288526]: Removed host np0005541911.localdomain Dec 
2 04:57:57 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:57 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:57 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:57 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:57:57 localhost ceph-mon[288526]: Reconfiguring crash.np0005541912 (monmap changed)... Dec 2 04:57:57 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:57:57 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain Dec 2 04:57:57 localhost systemd[1]: tmp-crun.XNxMko.mount: Deactivated successfully. 
Dec 2 04:57:57 localhost podman[299594]: 2025-12-02 09:57:57.140832164 +0000 UTC m=+0.134812584 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 2 04:57:57 localhost podman[299592]: 2025-12-02 09:57:57.192725677 +0000 UTC m=+0.190328457 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 04:57:57 localhost podman[299592]: 2025-12-02 09:57:57.203993611 +0000 UTC m=+0.201596481 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 2 04:57:57 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 2 04:57:57 localhost podman[299594]: 2025-12-02 09:57:57.225746844 +0000 UTC m=+0.219727234 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:57:57 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:57:57 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:57:57 localhost podman[299593]: 2025-12-02 09:57:57.250635493 +0000 UTC m=+0.245161929 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 2 04:57:57 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:57 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:57:57 localhost podman[299591]: 2025-12-02 09:57:57.273763209 +0000 UTC m=+0.273135393 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec 2 04:57:57 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:57 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:57:57 localhost podman[299593]: 2025-12-02 09:57:57.284130985 +0000 UTC m=+0.278657401 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 2 04:57:57 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 2 04:57:57 localhost nova_compute[281045]: 2025-12-02 09:57:57.314 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:57:57 localhost nova_compute[281045]: 2025-12-02 09:57:57.368 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:57:57 localhost nova_compute[281045]: 2025-12-02 09:57:57.369 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:57:57 localhost nova_compute[281045]: 2025-12-02 09:57:57.370 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:57:57 localhost nova_compute[281045]: 2025-12-02 09:57:57.370 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 2 04:57:57 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 2 04:57:58 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:58 localhost sshd[299674]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:57:58 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:57:58 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:58 localhost systemd[1]: Created slice User Slice of UID 1003.
Dec 2 04:57:58 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:57:58 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:58 localhost systemd[1]: Starting User Runtime Directory /run/user/1003...
Dec 2 04:57:58 localhost systemd-logind[760]: New session 68 of user tripleo-admin.
Dec 2 04:57:58 localhost systemd[1]: Finished User Runtime Directory /run/user/1003.
Dec 2 04:57:58 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:58 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:58 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:57:58 localhost ceph-mon[288526]: Reconfiguring osd.2 (monmap changed)...
Dec 2 04:57:58 localhost ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:57:58 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:58 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:58 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:58 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:57:58 localhost systemd[1]: Starting User Manager for UID 1003...
Dec 2 04:57:58 localhost systemd[299678]: Queued start job for default target Main User Target.
Dec 2 04:57:58 localhost systemd[299678]: Created slice User Application Slice.
Dec 2 04:57:58 localhost systemd[299678]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 2 04:57:58 localhost systemd[299678]: Started Daily Cleanup of User's Temporary Directories.
Dec 2 04:57:58 localhost systemd[299678]: Reached target Paths.
Dec 2 04:57:58 localhost systemd[299678]: Reached target Timers.
Dec 2 04:57:58 localhost systemd[299678]: Starting D-Bus User Message Bus Socket...
Dec 2 04:57:58 localhost systemd[299678]: Starting Create User's Volatile Files and Directories...
Dec 2 04:57:58 localhost systemd[299678]: Finished Create User's Volatile Files and Directories.
Dec 2 04:57:58 localhost systemd[299678]: Listening on D-Bus User Message Bus Socket.
Dec 2 04:57:58 localhost systemd[299678]: Reached target Sockets.
Dec 2 04:57:58 localhost systemd[299678]: Reached target Basic System.
Dec 2 04:57:58 localhost systemd[299678]: Reached target Main User Target.
Dec 2 04:57:58 localhost systemd[299678]: Startup finished in 119ms.
Dec 2 04:57:58 localhost systemd[1]: Started User Manager for UID 1003.
Dec 2 04:57:58 localhost systemd[1]: Started Session 68 of User tripleo-admin.
Dec 2 04:57:58 localhost nova_compute[281045]: 2025-12-02 09:57:58.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:57:59 localhost python3[299820]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line= - ip_netmask: 172.18.0.105/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:57:59 localhost ceph-mon[288526]: Reconfiguring osd.5 (monmap changed)...
Dec 2 04:57:59 localhost ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:57:59 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:57:59 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:59 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:57:59 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:59 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 2 04:57:59 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:57:59 localhost ceph-mon[288526]: mon.np0005541914@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:57:59 localhost python3[299966]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.105/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:58:00 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:58:00 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:00 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:58:00 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:00 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 2 04:58:00 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:58:00 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:00 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:00 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:58:00 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:58:00 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:58:00 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:58:00 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:00 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:00 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:58:00 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:58:00 localhost python3[300111]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.105 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:58:01 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:58:01 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:01 localhost ceph-mon[288526]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:58:01 localhost ceph-mon[288526]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 2 04:58:01 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:58:01 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:02 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:58:02 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:02 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:58:02 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:02 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 2 04:58:02 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:02 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:02 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:02 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:58:02 localhost ceph-mon[288526]: Reconfiguring mon.np0005541912 (monmap changed)...
Dec 2 04:58:02 localhost ceph-mon[288526]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 2 04:58:02 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:02 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:02 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:02 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:58:03.168 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:58:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:58:03.169 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:58:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:58:03.169 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:58:03 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:58:03 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:03 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:58:03 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:03 localhost ceph-mon[288526]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 2 04:58:03 localhost ceph-mon[288526]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 2 04:58:03 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:03 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:03 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 2 04:58:03 localhost podman[239757]: time="2025-12-02T09:58:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:58:03 localhost podman[239757]: @ - - [02/Dec/2025:09:58:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 2 04:58:03 localhost podman[239757]: @ - - [02/Dec/2025:09:58:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19190 "" "Go-http-client/1.1"
Dec 2 04:58:03 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 2 04:58:03 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:04 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:58:04 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:04 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:58:04 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:04 localhost ceph-mon[288526]: Reconfiguring osd.0 (monmap changed)...
Dec 2 04:58:04 localhost ceph-mon[288526]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 2 04:58:04 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:04 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:04 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:04 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 2 04:58:04 localhost ceph-mon[288526]: mon.np0005541914@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.637904) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484637958, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 2147, "num_deletes": 264, "total_data_size": 5523359, "memory_usage": 5751592, "flush_reason": "Manual Compaction"}
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484654987, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 3554573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19379, "largest_seqno": 21520, "table_properties": {"data_size": 3546188, "index_size": 4634, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 23810, "raw_average_key_size": 22, "raw_value_size": 3526765, "raw_average_value_size": 3314, "num_data_blocks": 194, "num_entries": 1064, "num_filter_entries": 1064, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669425, "oldest_key_time": 1764669425, "file_creation_time": 1764669484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 17157 microseconds, and 5555 cpu microseconds.
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.655056) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 3554573 bytes OK
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.655088) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.657079) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.657108) EVENT_LOG_v1 {"time_micros": 1764669484657101, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.657136) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 5512842, prev total WAL file size 5512842, number of live WAL files 2.
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.658486) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323839' seq:72057594037927935, type:22 .. '6B760031353530' seq:0, type:0; will stop at (end)
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(3471KB)], [30(15MB)]
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484658553, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 19657527, "oldest_snapshot_seqno": -1}
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 11403 keys, 18677973 bytes, temperature: kUnknown
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484744536, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 18677973, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18609801, "index_size": 38567, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28549, "raw_key_size": 305959, "raw_average_key_size": 26, "raw_value_size": 18411849, "raw_average_value_size": 1614, "num_data_blocks": 1472, "num_entries": 11403, "num_filter_entries": 11403, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 1764669484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.744957) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 18677973 bytes Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.747546) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.4 rd, 217.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 15.4 +0.0 blob) out(17.8 +0.0 blob), read-write-amplify(10.8) write-amplify(5.3) OK, records in: 11913, records dropped: 510 output_compression: NoCompression Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.747589) EVENT_LOG_v1 {"time_micros": 1764669484747569, "job": 16, "event": "compaction_finished", "compaction_time_micros": 86079, "compaction_time_cpu_micros": 28058, "output_level": 6, "num_output_files": 1, "total_output_size": 18677973, "num_input_records": 11913, "num_output_records": 11403, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484748240, "job": 16, "event": "table_file_deletion", "file_number": 32} Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484750425, "job": 16, 
"event": "table_file_deletion", "file_number": 30} Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.658358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.750550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.750559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.750562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.750565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:58:04 localhost ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.750568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:58:05 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:58:05 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:05 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:58:05 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:05 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs 
*=*", "mds", "allow"]} v 0) Dec 2 04:58:05 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:05 localhost ceph-mon[288526]: Saving service mon spec with placement label:mon Dec 2 04:58:05 localhost ceph-mon[288526]: Reconfiguring osd.3 (monmap changed)... Dec 2 04:58:05 localhost ceph-mon[288526]: Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:58:05 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:05 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:05 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:05 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:06 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:58:06 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:06 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:58:06 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:06 localhost 
ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Dec 2 04:58:06 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:06 localhost ceph-mon[288526]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... Dec 2 04:58:06 localhost ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:58:06 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:06 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:06 localhost ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:06 localhost ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:06 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "mon rm", "name": "np0005541914"} v 0) Dec 2 04:58:06 localhost ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon rm", "name": "np0005541914"} : dispatch Dec 2 04:58:06 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x561987797080 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0 Dec 2 04:58:06 localhost ceph-mon[288526]: mon.np0005541914@0(leader) e16 removed from 
monmap, suicide. Dec 2 04:58:06 localhost ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 2 04:58:06 localhost ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 2 04:58:06 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x56199114a000 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0 Dec 2 04:58:07 localhost podman[300146]: 2025-12-02 09:58:07.023865463 +0000 UTC m=+0.057637111 container died 699b233252c58098b0dcca9b2b21425d550e7754773bf4b3759bf26abfe89544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, name=rhceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux , RELEASE=main) Dec 2 04:58:07 localhost systemd[1]: var-lib-containers-storage-overlay-ce118f9e1514dd9e8c61f039c0b5ce0d2beef8304000bf74b350ea0ec7a4ea4b-merged.mount: Deactivated successfully. 
Dec 2 04:58:07 localhost podman[300146]: 2025-12-02 09:58:07.057236561 +0000 UTC m=+0.091008129 container remove 699b233252c58098b0dcca9b2b21425d550e7754773bf4b3759bf26abfe89544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, distribution-scope=public, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:58:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:58:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 04:58:07 localhost podman[300236]: 2025-12-02 09:58:07.670546411 +0000 UTC m=+0.138564179 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:58:07 localhost podman[300236]: 2025-12-02 09:58:07.721783793 +0000 UTC m=+0.189801481 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:58:07 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:58:07 localhost podman[300237]: 2025-12-02 09:58:07.628669043 +0000 UTC m=+0.091063969 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, name=ubi9-minimal, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7) Dec 2 04:58:07 localhost podman[300237]: 2025-12-02 09:58:07.810574342 +0000 UTC m=+0.272969318 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, vcs-type=git) Dec 2 04:58:07 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 04:58:07 localhost systemd[1]: ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074@mon.np0005541914.service: Deactivated successfully. Dec 2 04:58:07 localhost systemd[1]: Stopped Ceph mon.np0005541914 for c7c8e171-a193-56fb-95fa-8879fcfa7074. Dec 2 04:58:07 localhost systemd[1]: ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074@mon.np0005541914.service: Consumed 11.455s CPU time. Dec 2 04:58:07 localhost systemd[1]: Reloading. 
Dec 2 04:58:08 localhost systemd-rc-local-generator[300385]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:58:08 localhost systemd-sysv-generator[300390]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:58:08 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:58:08 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:58:08 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:58:08 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:58:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:58:08 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:58:08 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:58:08 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:58:08 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:58:08 localhost podman[300409]: Dec 2 04:58:08 localhost podman[300409]: 2025-12-02 09:58:08.397752695 +0000 UTC m=+0.064439407 container create 2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_dirac, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, release=1763362218, io.buildah.version=1.41.4, GIT_BRANCH=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, ceph=True) Dec 2 04:58:08 localhost systemd[1]: Started 
libpod-conmon-2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf.scope. Dec 2 04:58:08 localhost systemd[1]: Started libcrun container. Dec 2 04:58:08 localhost podman[300409]: 2025-12-02 09:58:08.453544187 +0000 UTC m=+0.120230929 container init 2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_dirac, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, release=1763362218, io.buildah.version=1.41.4, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, name=rhceph, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:58:08 localhost systemd[1]: tmp-crun.COJN1E.mount: Deactivated successfully. 
Dec 2 04:58:08 localhost podman[300409]: 2025-12-02 09:58:08.470068211 +0000 UTC m=+0.136754943 container start 2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_dirac, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, GIT_BRANCH=main, release=1763362218, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:58:08 localhost podman[300409]: 2025-12-02 09:58:08.471525996 +0000 UTC m=+0.138212788 container attach 2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_dirac, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., 
build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, ceph=True, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, release=1763362218, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:58:08 localhost happy_dirac[300424]: 167 167 Dec 2 04:58:08 localhost podman[300409]: 2025-12-02 09:58:08.474305141 +0000 UTC m=+0.140991853 container died 2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_dirac, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:58:08 localhost systemd[1]: libpod-2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf.scope: Deactivated successfully. Dec 2 04:58:08 localhost podman[300409]: 2025-12-02 09:58:08.376341172 +0000 UTC m=+0.043027974 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:08 localhost podman[300429]: 2025-12-02 09:58:08.549946058 +0000 UTC m=+0.067197111 container remove 2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_dirac, architecture=x86_64, GIT_BRANCH=main, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, name=rhceph, version=7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, release=1763362218, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vendor=Red Hat, Inc.) Dec 2 04:58:08 localhost systemd[1]: libpod-conmon-2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf.scope: Deactivated successfully. 
Dec 2 04:58:09 localhost podman[300496]: Dec 2 04:58:09 localhost podman[300496]: 2025-12-02 09:58:09.231579202 +0000 UTC m=+0.076762182 container create 1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_pare, RELEASE=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, vcs-type=git, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_CLEAN=True) Dec 2 04:58:09 localhost systemd[1]: Started libpod-conmon-1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a.scope. Dec 2 04:58:09 localhost systemd[1]: Started libcrun container. 
Dec 2 04:58:09 localhost podman[300496]: 2025-12-02 09:58:09.200730341 +0000 UTC m=+0.045913371 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:09 localhost podman[300496]: 2025-12-02 09:58:09.301259658 +0000 UTC m=+0.146442648 container init 1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_pare, io.buildah.version=1.41.4, name=rhceph, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.expose-services=, ceph=True, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:58:09 localhost podman[300496]: 2025-12-02 09:58:09.30857161 +0000 UTC m=+0.153754580 container start 1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_pare, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1763362218, io.openshift.expose-services=, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, description=Red Hat Ceph Storage 7) Dec 2 04:58:09 localhost podman[300496]: 2025-12-02 09:58:09.308828699 +0000 UTC m=+0.154011679 container attach 1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_pare, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1763362218, GIT_CLEAN=True, vcs-type=git, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2025-11-26T19:44:28Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=) Dec 2 04:58:09 localhost adoring_pare[300512]: 167 167 Dec 2 04:58:09 localhost systemd[1]: libpod-1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a.scope: Deactivated successfully. Dec 2 04:58:09 localhost podman[300496]: 2025-12-02 09:58:09.312577573 +0000 UTC m=+0.157760543 container died 1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_pare, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, release=1763362218, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7) Dec 2 04:58:09 localhost systemd[1]: var-lib-containers-storage-overlay-c6c345eef9df37ff8200d98843a81e91c0205ec5ea28af1b29d99d20ce571d81-merged.mount: Deactivated successfully. 
Dec 2 04:58:09 localhost systemd[1]: var-lib-containers-storage-overlay-1720ab8a23b42af740e3fb763e709a989ecaa8641ba4c6dabd496829b3d241d4-merged.mount: Deactivated successfully. Dec 2 04:58:09 localhost podman[300517]: 2025-12-02 09:58:09.420428623 +0000 UTC m=+0.092029608 container remove 1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_pare, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vcs-type=git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, description=Red Hat Ceph Storage 7, release=1763362218) Dec 2 04:58:09 localhost systemd[1]: libpod-conmon-1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a.scope: Deactivated successfully. 
Dec 2 04:58:10 localhost podman[300592]: Dec 2 04:58:10 localhost podman[300592]: 2025-12-02 09:58:10.246285587 +0000 UTC m=+0.066462728 container create 8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_jones, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, ceph=True, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:58:10 localhost systemd[1]: Started libpod-conmon-8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8.scope. Dec 2 04:58:10 localhost systemd[1]: Started libcrun container. 
Dec 2 04:58:10 localhost podman[300592]: 2025-12-02 09:58:10.305681639 +0000 UTC m=+0.125858760 container init 8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_jones, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, release=1763362218, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.buildah.version=1.41.4) Dec 2 04:58:10 localhost podman[300592]: 2025-12-02 09:58:10.213810917 +0000 UTC m=+0.033988098 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:10 localhost nervous_jones[300607]: 167 167 Dec 2 04:58:10 localhost podman[300592]: 2025-12-02 09:58:10.322561224 +0000 UTC m=+0.142738365 container start 8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_jones, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, release=1763362218, vendor=Red Hat, Inc., GIT_CLEAN=True, 
CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, ceph=True, com.redhat.component=rhceph-container, name=rhceph, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:58:10 localhost podman[300592]: 2025-12-02 09:58:10.322897015 +0000 UTC m=+0.143074156 container attach 8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_jones, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, 
release=1763362218, io.buildah.version=1.41.4, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, CEPH_POINT_RELEASE=) Dec 2 04:58:10 localhost systemd[1]: libpod-8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8.scope: Deactivated successfully. Dec 2 04:58:10 localhost podman[300592]: 2025-12-02 09:58:10.325003289 +0000 UTC m=+0.145180420 container died 8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_jones, CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, ceph=True, RELEASE=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, version=7, name=rhceph, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:58:10 localhost podman[300612]: 2025-12-02 09:58:10.400673337 +0000 UTC m=+0.065708135 container remove 8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_jones, GIT_BRANCH=main, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, build-date=2025-11-26T19:44:28Z, release=1763362218, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, architecture=x86_64, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:58:10 localhost systemd[1]: tmp-crun.s0RQ8C.mount: Deactivated successfully. Dec 2 04:58:10 localhost systemd[1]: var-lib-containers-storage-overlay-cfd1042074871156c75482c47c7c0bf8e1b46009482b3a3d376235193c802f55-merged.mount: Deactivated successfully. Dec 2 04:58:10 localhost systemd[1]: libpod-conmon-8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8.scope: Deactivated successfully. 
Dec 2 04:58:11 localhost podman[300689]: Dec 2 04:58:11 localhost podman[300689]: 2025-12-02 09:58:11.102760596 +0000 UTC m=+0.074880286 container create 7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_benz, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, com.redhat.component=rhceph-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , version=7, ceph=True, name=rhceph, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_BRANCH=main, vendor=Red Hat, Inc.) Dec 2 04:58:11 localhost systemd[1]: Started libpod-conmon-7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6.scope. Dec 2 04:58:11 localhost systemd[1]: Started libcrun container. 
Dec 2 04:58:11 localhost podman[300689]: 2025-12-02 09:58:11.153351769 +0000 UTC m=+0.125471439 container init 7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_benz, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , ceph=True, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-type=git, GIT_CLEAN=True, version=7, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7) Dec 2 04:58:11 localhost elegant_benz[300704]: 167 167 Dec 2 04:58:11 localhost systemd[1]: libpod-7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6.scope: Deactivated successfully. 
Dec 2 04:58:11 localhost podman[300689]: 2025-12-02 09:58:11.063599151 +0000 UTC m=+0.035718851 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:11 localhost podman[300689]: 2025-12-02 09:58:11.16616041 +0000 UTC m=+0.138280100 container start 7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_benz, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, name=rhceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:58:11 localhost podman[300689]: 2025-12-02 09:58:11.166363206 +0000 UTC m=+0.138482896 container attach 7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_benz, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:58:11 localhost podman[300689]: 2025-12-02 09:58:11.168865263 +0000 UTC m=+0.140985023 container died 7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_benz, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, release=1763362218, architecture=x86_64, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, description=Red Hat 
Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container) Dec 2 04:58:11 localhost podman[300709]: 2025-12-02 09:58:11.233640469 +0000 UTC m=+0.061132306 container remove 7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_benz, architecture=x86_64, maintainer=Guillaume Abrioux , distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, release=1763362218, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=) Dec 2 04:58:11 localhost systemd[1]: libpod-conmon-7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6.scope: Deactivated successfully. Dec 2 04:58:11 localhost systemd[1]: var-lib-containers-storage-overlay-0820f6c8f74c2f69aa9f87491ba9d604307d08116fb6005c0bb84b005f425c25-merged.mount: Deactivated successfully. 
Dec 2 04:58:11 localhost podman[300777]: Dec 2 04:58:11 localhost podman[300777]: 2025-12-02 09:58:11.938286164 +0000 UTC m=+0.078263298 container create 66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_zhukovsky, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, release=1763362218, architecture=x86_64, GIT_BRANCH=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public) Dec 2 04:58:11 localhost systemd[1]: Started libpod-conmon-66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308.scope. Dec 2 04:58:12 localhost systemd[1]: Started libcrun container. 
Dec 2 04:58:12 localhost podman[300777]: 2025-12-02 09:58:11.908585419 +0000 UTC m=+0.048562493 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:12 localhost podman[300777]: 2025-12-02 09:58:12.014488899 +0000 UTC m=+0.154465983 container init 66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_zhukovsky, release=1763362218, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_BRANCH=main, vendor=Red Hat, Inc., name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public) Dec 2 04:58:12 localhost romantic_zhukovsky[300791]: 167 167 Dec 2 04:58:12 localhost podman[300777]: 2025-12-02 09:58:12.022192574 +0000 UTC m=+0.162169648 container start 66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_zhukovsky, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, architecture=x86_64, GIT_CLEAN=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container) Dec 2 04:58:12 localhost podman[300777]: 2025-12-02 09:58:12.022590676 +0000 UTC m=+0.162567760 container attach 66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_zhukovsky, ceph=True, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, architecture=x86_64, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, release=1763362218, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 04:58:12 localhost systemd[1]: libpod-66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308.scope: Deactivated successfully. Dec 2 04:58:12 localhost podman[300777]: 2025-12-02 09:58:12.024627289 +0000 UTC m=+0.164604383 container died 66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_zhukovsky, CEPH_POINT_RELEASE=, release=1763362218, vcs-type=git, version=7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:58:12 localhost podman[300796]: 2025-12-02 09:58:12.092382795 +0000 UTC m=+0.062774235 container remove 66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=romantic_zhukovsky, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., RELEASE=main, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux , distribution-scope=public, release=1763362218, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, version=7) Dec 2 04:58:12 localhost systemd[1]: libpod-conmon-66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308.scope: Deactivated successfully. 
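The podman records above trace one short-lived container (`66dbac33…`) through its full lifecycle: create, init, start, attach, died, remove. A minimal sketch for pulling that event sequence out of such a capture — the field layout (`podman[PID]: <timestamp> +0000 UTC m=+<mono> container <event> <64-hex id> (…)`) is assumed from the lines above, not from any podman specification:

```python
import re

# Matches podman event records of the shape seen in this capture:
#   "... podman[300777]: 2025-12-02 09:58:11.938286164 +0000 UTC
#    m=+0.078263298 container create 66dbac33e87a... (image=..., name=...)"
EVENT_RE = re.compile(
    r"podman\[(?P<pid>\d+)\]:\s+"
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC m=\+\S+\s+"
    r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64})"
)

def lifecycle(log_text):
    """Return (timestamp, event, short container id) tuples, in log order."""
    return [
        (m.group("ts"), m.group("event"), m.group("cid")[:12])
        for m in EVENT_RE.finditer(log_text)
    ]

sample = (
    "Dec 2 04:58:11 localhost podman[300777]: 2025-12-02 09:58:11.938286164 "
    "+0000 UTC m=+0.078263298 container create "
    "66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308 "
    "(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_zhukovsky)"
)
print(lifecycle(sample))
# [('2025-12-02 09:58:11.938286164', 'create', '66dbac33e87a')]
```

Applied to the whole block above, this yields create → init → start → attach → died → remove for the same short id, which is the signature of a one-shot `podman run --rm`-style invocation (here, cephadm probing the rhceph image).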
Dec 2 04:58:12 localhost openstack_network_exporter[241816]: ERROR 09:58:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:58:12 localhost openstack_network_exporter[241816]: ERROR 09:58:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:58:12 localhost openstack_network_exporter[241816]: ERROR 09:58:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:58:12 localhost openstack_network_exporter[241816]: ERROR 09:58:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:58:12 localhost openstack_network_exporter[241816]: Dec 2 04:58:12 localhost openstack_network_exporter[241816]: ERROR 09:58:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:58:12 localhost openstack_network_exporter[241816]: Dec 2 04:58:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 04:58:12 localhost podman[300813]: 2025-12-02 09:58:12.171504849 +0000 UTC m=+0.053178343 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 04:58:12 localhost podman[300813]: 2025-12-02 09:58:12.208726535 +0000 UTC m=+0.090399989 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 04:58:12 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
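The multipathd health_status events above embed the container's full configuration as a `config_data={...}` Python dict literal inside the label list. Since it is a literal (quoted strings, `True`, nested dicts), a balanced-brace scan plus `ast.literal_eval` can recover it without `eval`; this is a sketch under the assumption that the literal contains no braces inside its strings, which holds for the lines shown here:

```python
import ast

def extract_config_data(event_line):
    """Pull the config_data={...} payload out of a podman event line.

    The payload is printed as a Python dict literal (see the multipathd
    health_status records), so scanning to the matching closing brace and
    handing the slice to ast.literal_eval recovers it safely.
    """
    start = event_line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(event_line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(event_line[start : i + 1])
    raise ValueError("unbalanced config_data literal")

# Trimmed sample in the same shape as the log lines above:
line = (
    "container health_status 2726... (name=multipathd, health_status=healthy, "
    "config_data={'image': 'quay.io/podified-antelope-centos9/"
    "openstack-multipathd:current-podified', 'net': 'host', "
    "'privileged': True, 'restart': 'always'}, tcib_managed=true)"
)
cfg = extract_config_data(line)
print(cfg["restart"])  # always
```

This makes it easy to diff the running configuration (volumes, healthcheck command, privileges) against what edpm_ansible was expected to deploy.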
Dec 2 04:58:12 localhost systemd[1]: var-lib-containers-storage-overlay-03b615f3eb610edb6066fc837da58cf389fc7c49d10aaf553d448c72d847ebc5-merged.mount: Deactivated successfully. Dec 2 04:58:13 localhost podman[300943]: 2025-12-02 09:58:13.133636241 +0000 UTC m=+0.093977039 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Dec 2 04:58:13 localhost podman[300943]: 2025-12-02 09:58:13.240899523 +0000 UTC m=+0.201240321 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vcs-type=git, maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, version=7, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 
2025-12-02 09:58:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:20 localhost podman[301465]: Dec 2 04:58:20 localhost podman[301465]: 2025-12-02 09:58:20.526327767 +0000 UTC m=+0.078914598 container create 616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_northcutt, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=7, com.redhat.component=rhceph-container, vcs-type=git, GIT_BRANCH=main, io.buildah.version=1.41.4, name=rhceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, ceph=True, build-date=2025-11-26T19:44:28Z, architecture=x86_64) Dec 2 04:58:20 localhost systemd[1]: Started libpod-conmon-616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313.scope. Dec 2 04:58:20 localhost systemd[1]: Started libcrun container. 
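The ceilometer_agent_compute run above is one polling cycle in which every pollster (cpu, memory.usage, disk.device.\*, network.\*) is skipped for lack of resources, i.e. no instances are currently scheduled on this compute node. A small sketch for tallying those skips across a capture, matched against the exact "Skip pollster …, no resources found this cycle" message format emitted in the lines above:

```python
import re
from collections import Counter

# "Skip pollster <name>, no resources found this cycle" as logged by
# ceilometer.polling.manager in the DEBUG records above.
SKIP_RE = re.compile(r"Skip pollster (\S+), no resources found this cycle")

def skipped_pollsters(log_text):
    """Count how often each pollster was skipped in the captured window."""
    return Counter(SKIP_RE.findall(log_text))

sample = (
    "Skip pollster cpu, no resources found this cycle poll_and_notify ...\n"
    "Skip pollster memory.usage, no resources found this cycle poll_and_notify ...\n"
    "Skip pollster cpu, no resources found this cycle poll_and_notify ...\n"
)
print(skipped_pollsters(sample))
# Counter({'cpu': 2, 'memory.usage': 1})
```

A pollster that is skipped every cycle over a long window is normal on an empty hypervisor; skips that start abruptly on a loaded one would point at a libvirt/resource-discovery problem instead.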
Dec 2 04:58:20 localhost podman[301465]: 2025-12-02 09:58:20.49364346 +0000 UTC m=+0.046230331 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:20 localhost podman[301465]: 2025-12-02 09:58:20.597313813 +0000 UTC m=+0.149900624 container init 616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_northcutt, io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, RELEASE=main, architecture=x86_64, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container) Dec 2 04:58:20 localhost cranky_northcutt[301480]: 167 167 Dec 2 04:58:20 localhost systemd[1]: libpod-616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313.scope: Deactivated successfully. 
Dec 2 04:58:20 localhost podman[301465]: 2025-12-02 09:58:20.608615197 +0000 UTC m=+0.161202038 container start 616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_northcutt, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, maintainer=Guillaume Abrioux , ceph=True, GIT_CLEAN=True, io.buildah.version=1.41.4, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_BRANCH=main, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public) Dec 2 04:58:20 localhost podman[301465]: 2025-12-02 09:58:20.609713741 +0000 UTC m=+0.162300692 container attach 616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_northcutt, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, 
io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, ceph=True, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:58:20 localhost podman[301465]: 2025-12-02 09:58:20.612767895 +0000 UTC m=+0.165354756 container died 616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_northcutt, ceph=True, io.buildah.version=1.41.4, GIT_BRANCH=main, architecture=x86_64, vcs-type=git, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:58:20 localhost podman[301486]: 2025-12-02 09:58:20.714407595 +0000 UTC m=+0.097240328 container remove 616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_northcutt, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, name=rhceph, maintainer=Guillaume Abrioux , version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.41.4, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:58:20 localhost systemd[1]: libpod-conmon-616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313.scope: Deactivated successfully. 
Dec 2 04:58:20 localhost podman[301502]: Dec 2 04:58:20 localhost podman[301502]: 2025-12-02 09:58:20.831111165 +0000 UTC m=+0.077827925 container create 4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_davinci, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, release=1763362218, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_BRANCH=main, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, ceph=True, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:58:20 localhost systemd[1]: Started libpod-conmon-4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2.scope. Dec 2 04:58:20 localhost systemd[1]: Started libcrun container. 
Dec 2 04:58:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b077ccada571db10a241910bd1fb6d69dd465d03c0fc75967d5a86a88f656ac0/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 2 04:58:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b077ccada571db10a241910bd1fb6d69dd465d03c0fc75967d5a86a88f656ac0/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Dec 2 04:58:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b077ccada571db10a241910bd1fb6d69dd465d03c0fc75967d5a86a88f656ac0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 2 04:58:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b077ccada571db10a241910bd1fb6d69dd465d03c0fc75967d5a86a88f656ac0/merged/var/lib/ceph/mon/ceph-np0005541914 supports timestamps until 2038 (0x7fffffff)
Dec 2 04:58:20 localhost podman[301502]: 2025-12-02 09:58:20.893089556 +0000 UTC m=+0.139806306 container init 4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_davinci, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_CLEAN=True, release=1763362218, maintainer=Guillaume Abrioux , vcs-type=git, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 2 04:58:20 localhost podman[301502]: 2025-12-02 09:58:20.799721347 +0000 UTC m=+0.046438177 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:58:20 localhost podman[301502]: 2025-12-02 09:58:20.901559404 +0000 UTC m=+0.148276204 container start 4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_davinci, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.openshift.expose-services=, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, architecture=x86_64, version=7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 2 04:58:20 localhost podman[301502]: 2025-12-02 09:58:20.901822322 +0000 UTC m=+0.148539072 container attach 4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_davinci, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, ceph=True, version=7, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 04:58:20 localhost systemd[1]: libpod-4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2.scope: Deactivated successfully.
Dec 2 04:58:20 localhost podman[301502]: 2025-12-02 09:58:20.981893124 +0000 UTC m=+0.228609884 container died 4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_davinci, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, ceph=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.component=rhceph-container)
Dec 2 04:58:21 localhost podman[301543]: 2025-12-02 09:58:21.069791687 +0000 UTC m=+0.078989621 container remove 4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_davinci, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 2 04:58:21 localhost systemd[1]: libpod-conmon-4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2.scope: Deactivated successfully.
Dec 2 04:58:21 localhost sshd[301559]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:58:21 localhost systemd[1]: Reloading.
Dec 2 04:58:21 localhost systemd-sysv-generator[301587]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:58:21 localhost systemd-rc-local-generator[301583]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: var-lib-containers-storage-overlay-e59137448da341899478bbf4dcf4a802f4eda5b079ce78d7c8357844800a439d-merged.mount: Deactivated successfully.
Dec 2 04:58:21 localhost systemd[1]: Reloading.
Dec 2 04:58:21 localhost systemd-rc-local-generator[301626]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:58:21 localhost systemd-sysv-generator[301632]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:58:21 localhost systemd[1]: Starting Ceph mon.np0005541914 for c7c8e171-a193-56fb-95fa-8879fcfa7074...
Dec 2 04:58:22 localhost podman[301691]:
Dec 2 04:58:22 localhost podman[301691]: 2025-12-02 09:58:22.209138524 +0000 UTC m=+0.074261507 container create a1c1451e33b032c48fb4a704a99f88b0ad72459003fc18f41bd18df3b3917cd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, version=7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, architecture=x86_64, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., ceph=True, RELEASE=main, io.buildah.version=1.41.4)
Dec 2 04:58:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8872674fa163d17628e702e21cb76be31def8e0c6eb5a34156d3c47d9a9d2a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 2 04:58:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8872674fa163d17628e702e21cb76be31def8e0c6eb5a34156d3c47d9a9d2a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 2 04:58:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8872674fa163d17628e702e21cb76be31def8e0c6eb5a34156d3c47d9a9d2a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 2 04:58:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8872674fa163d17628e702e21cb76be31def8e0c6eb5a34156d3c47d9a9d2a4/merged/var/lib/ceph/mon/ceph-np0005541914 supports timestamps until 2038 (0x7fffffff)
Dec 2 04:58:22 localhost podman[301691]: 2025-12-02 09:58:22.175338792 +0000 UTC m=+0.040461795 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:58:22 localhost podman[301691]: 2025-12-02 09:58:22.27523107 +0000 UTC m=+0.140354033 container init a1c1451e33b032c48fb4a704a99f88b0ad72459003fc18f41bd18df3b3917cd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, build-date=2025-11-26T19:44:28Z, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 2 04:58:22 localhost podman[301691]: 2025-12-02 09:58:22.283848553 +0000 UTC m=+0.148971516 container start a1c1451e33b032c48fb4a704a99f88b0ad72459003fc18f41bd18df3b3917cd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_CLEAN=True, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 2 04:58:22 localhost bash[301691]: a1c1451e33b032c48fb4a704a99f88b0ad72459003fc18f41bd18df3b3917cd4
Dec 2 04:58:22 localhost systemd[1]: Started Ceph mon.np0005541914 for c7c8e171-a193-56fb-95fa-8879fcfa7074.
Dec 2 04:58:22 localhost ceph-mon[301710]: set uid:gid to 167:167 (ceph:ceph)
Dec 2 04:58:22 localhost ceph-mon[301710]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2
Dec 2 04:58:22 localhost ceph-mon[301710]: pidfile_write: ignore empty --pid-file
Dec 2 04:58:22 localhost ceph-mon[301710]: load: jerasure load: lrc
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: RocksDB version: 7.9.2
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Git sha 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: DB SUMMARY
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: DB Session ID: O7EMRIXC8F5M1Z077C5B
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: CURRENT file: CURRENT
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: IDENTITY file: IDENTITY
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005541914/store.db dir, Total Num: 0, files:
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005541914/store.db: 000004.log size: 636 ;
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.error_if_exists: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.create_if_missing: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.paranoid_checks: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.flush_verify_memtable_count: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.env: 0x562ea16f79e0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.fs: PosixFileSystem
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.info_log: 0x562ea3be2d20
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_file_opening_threads: 16
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.statistics: (nil)
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.use_fsync: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_log_file_size: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_manifest_file_size: 1073741824
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.log_file_time_to_roll: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.keep_log_file_num: 1000
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.recycle_log_file_num: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.allow_fallocate: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.allow_mmap_reads: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.allow_mmap_writes: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.use_direct_reads: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.create_missing_column_families: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.db_log_dir:
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.wal_dir:
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.table_cache_numshardbits: 6
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.WAL_ttl_seconds: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.WAL_size_limit_MB: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.manifest_preallocation_size: 4194304
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.is_fd_close_on_exec: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.advise_random_on_open: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.db_write_buffer_size: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.write_buffer_manager: 0x562ea3bf3540
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.access_hint_on_compaction_start: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.random_access_max_buffer_size: 1048576
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.use_adaptive_mutex: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.rate_limiter: (nil)
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.wal_recovery_mode: 2
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.enable_thread_tracking: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.enable_pipelined_write: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.unordered_write: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.allow_concurrent_memtable_write: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.write_thread_max_yield_usec: 100
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.write_thread_slow_yield_usec: 3
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.row_cache: None
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.wal_filter: None
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.avoid_flush_during_recovery: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.allow_ingest_behind: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.two_write_queues: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.manual_wal_flush: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.wal_compression: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.atomic_flush: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.persist_stats_to_disk: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.write_dbid_to_manifest: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.log_readahead_size: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.file_checksum_gen_factory: Unknown
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.best_efforts_recovery: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.allow_data_in_errors: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.db_host_id: __hostname__
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.enforce_single_del_contracts: true
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_background_jobs: 2
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_background_compactions: -1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_subcompactions: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.avoid_flush_during_shutdown: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.writable_file_max_buffer_size: 1048576
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.delayed_write_rate : 16777216
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_total_wal_size: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.stats_dump_period_sec: 600
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.stats_persist_period_sec: 600
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.stats_history_buffer_size: 1048576
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_open_files: -1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.bytes_per_sync: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.wal_bytes_per_sync: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.strict_bytes_per_sync: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compaction_readahead_size: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_background_flushes: -1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Compression algorithms supported:
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: #011kZSTD supported: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: #011kXpressCompression supported: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: #011kBZip2Compression supported: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: #011kLZ4Compression supported: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: #011kZlibCompression supported: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: #011kLZ4HCCompression supported: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: #011kSnappyCompression supported: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005541914/store.db/MANIFEST-000005
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.merge_operator:
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compaction_filter: None
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compaction_filter_factory: None
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562ea3be2980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562ea3bdf1f0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.write_buffer_size: 33554432
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_write_buffer_number: 2
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compression: NoCompression
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.num_levels: 7
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.min_write_buffer_number_to_merge: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compression_opts.level: 32767
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compression_opts.enabled: false
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.level0_file_num_compaction_trigger: 4
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_base: 268435456
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.arena_block_size: 1048576
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.table_properties_collectors:
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.inplace_update_support: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.bloom_locality: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.max_successive_merges: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.force_consistency_checks: 1
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.ttl: 2592000
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.enable_blob_files: false
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.min_blob_size: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.blob_file_size: 268435456
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005541914/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2a601a42-6d19-4945-9484-73e64f055198
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669502334943, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669502337426, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 648, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 526, "raw_average_value_size": 105, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression":
"NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669502337564, "job": 1, "event": "recovery_finished"} Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562ea3c06e00 Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: DB pointer 0x562ea3cfc000 Dec 2 04:58:22 localhost ceph-mon[301710]: mon.np0005541914 does not exist in monmap, will attempt to join an existing cluster Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:58:22 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 
MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.72 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Sum 1/0 1.72 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown 
for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562ea3bdf1f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Dec 2 04:58:22 localhost ceph-mon[301710]: using public_addr v2:172.18.0.105:0/0 -> [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] Dec 2 04:58:22 localhost ceph-mon[301710]: starting mon.np0005541914 rank -1 at public addrs [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] at bind addrs [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005541914 fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 Dec 2 04:58:22 localhost ceph-mon[301710]: mon.np0005541914@-1(???) e0 preinit fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914@-1(synchronizing) e16 sync_obtain_latest_monmap Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914@-1(synchronizing) e16 sync_obtain_latest_monmap obtained monmap e16 Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914@-1(synchronizing).mds e16 new map Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01115#012flags#01112 joinable allow_snaps 
allow_multimds_snaps#012created#0112025-12-02T08:05:53.424954+0000#012modified#0112025-12-02T09:52:13.505190+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01184#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26573}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26573 members: 26573#012[mds.mds.np0005541912.ghcwcm{0:26573} state up:active seq 13 addr [v2:172.18.0.106:6808/955707462,v1:172.18.0.106:6809/955707462] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005541914.sqgqkj{-1:16923} state up:standby seq 1 addr [v2:172.18.0.108:6808/2216063099,v1:172.18.0.108:6809/2216063099] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005541913.maexpe{-1:26386} state up:standby seq 1 addr [v2:172.18.0.107:6808/3746047079,v1:172.18.0.107:6809/3746047079] compat {c=[1],r=[1],i=[17ff]}] Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914@-1(synchronizing).osd e89 crush map has features 3314933000852226048, adjusting msgr requires Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914@-1(synchronizing).osd e89 crush map has features 288514051259236352, adjusting msgr requires Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914@-1(synchronizing).osd e89 crush map has features 288514051259236352, adjusting msgr requires Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914@-1(synchronizing).osd e89 crush map has features 
288514051259236352, adjusting msgr requires Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541913 calling monitor election Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541912 calling monitor election Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541913 calling monitor election Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914 calling monitor election Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913 in quorum (ranks 0,1,2) Dec 2 04:58:24 localhost ceph-mon[301710]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005541914,np0005541912) Dec 2 04:58:24 localhost ceph-mon[301710]: Cluster is now healthy Dec 2 04:58:24 localhost ceph-mon[301710]: overall HEALTH_OK Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: Removed label _admin from host np0005541911.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541912 calling monitor election Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914 calling monitor election Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541913 calling monitor election Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541911 calling monitor election Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913,np0005541911 in quorum (ranks 0,1,2,3) Dec 2 04:58:24 localhost ceph-mon[301710]: overall HEALTH_OK Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' 
entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Removing np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:58:24 localhost ceph-mon[301710]: Removing np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:58:24 localhost ceph-mon[301710]: Removing np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' 
entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: Removing daemon mgr.np0005541911.adcgiw from np0005541911.localdomain -- ports [8765] Dec 2 04:58:24 localhost ceph-mon[301710]: Removing key for mgr.np0005541911.adcgiw Dec 2 04:58:24 localhost ceph-mon[301710]: Safe to remove mon.np0005541911: new quorum should be ['np0005541914', 'np0005541912', 'np0005541913'] (from ['np0005541914', 'np0005541912', 'np0005541913']) Dec 2 04:58:24 localhost ceph-mon[301710]: Removing monitor np0005541911 from monmap... Dec 2 04:58:24 localhost ceph-mon[301710]: Removing daemon mon.np0005541911 from np0005541911.localdomain -- ports [] Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541912 calling monitor election Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914 calling monitor election Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541913 calling monitor election Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1) Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541912 calling monitor election Dec 2 04:58:24 localhost ceph-mon[301710]: overall HEALTH_OK Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914 calling monitor election Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913 in quorum (ranks 0,1,2) Dec 2 04:58:24 localhost ceph-mon[301710]: overall HEALTH_OK Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: 
from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: Added label _no_schedule to host np0005541911.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541911.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost 
ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"}]': finished Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: Removed host np0005541911.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring crash.np0005541912 (monmap changed)... 
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring osd.2 (monmap changed)... Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon osd.2 on np0005541912.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring osd.5 (monmap changed)... 
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon osd.5 on np0005541912.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)... Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)... 
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring mon.np0005541912 (monmap changed)... Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring crash.np0005541913 (monmap changed)... 
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring osd.0 (monmap changed)... Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Saving service mon spec with placement label:mon Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring osd.3 (monmap changed)... 
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: Remove daemons mon.np0005541914 Dec 2 04:58:24 localhost 
ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon rm", "name": "np0005541914"} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Safe to remove mon.np0005541914: new quorum should be ['np0005541912', 'np0005541913'] (from ['np0005541912', 'np0005541913']) Dec 2 04:58:24 localhost ceph-mon[301710]: Removing monitor np0005541914 from monmap... Dec 2 04:58:24 localhost ceph-mon[301710]: Removing daemon mon.np0005541914 from np0005541914.localdomain -- ports [] Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541912 calling monitor election Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541913 calling monitor election Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541912 is new leader, mons np0005541912,np0005541913 in quorum (ranks 0,1) Dec 2 04:58:24 localhost ceph-mon[301710]: overall HEALTH_OK Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring crash.np0005541914 (monmap changed)... 
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring osd.1 (monmap changed)... Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon osd.1 on np0005541914.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring osd.4 (monmap changed)... 
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon osd.4 on np0005541914.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)... Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)... 
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:58:24 localhost ceph-mon[301710]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring osd.2 (monmap changed)...
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring osd.5 (monmap changed)...
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 2 04:58:24 localhost ceph-mon[301710]: Deploying daemon mon.np0005541914 on np0005541914.localdomain
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring osd.0 (monmap changed)...
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring osd.3 (monmap changed)...
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:58:24 localhost ceph-mon[301710]: mon.np0005541914@-1(synchronizing).paxosservice(auth 1..40) refresh upgraded, format 0 -> 3
Dec 2 04:58:25 localhost podman[301858]: 2025-12-02 09:58:25.823691252 +0000 UTC m=+0.101310352 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, RELEASE=main, version=7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 2 04:58:25 localhost podman[301858]: 2025-12-02 09:58:25.922101214 +0000 UTC m=+0.199720364 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, release=1763362218, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-type=git, ceph=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7)
Dec 2 04:58:27 localhost ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x5619910b8000 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0
Dec 2 04:58:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:58:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:58:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:58:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:58:27 localhost systemd[1]: tmp-crun.QBiGQK.mount: Deactivated successfully.
Dec 2 04:58:27 localhost podman[302061]: 2025-12-02 09:58:27.761740775 +0000 UTC m=+0.107235123 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 2 04:58:27 localhost podman[302061]: 2025-12-02 09:58:27.795965519 +0000 UTC m=+0.141459817 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 2 04:58:27 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 04:58:27 localhost podman[302063]: 2025-12-02 09:58:27.851432311 +0000 UTC m=+0.190215114 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 2 04:58:27 localhost podman[302062]: 2025-12-02 09:58:27.88221686 +0000 UTC m=+0.224822940 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 04:58:27 localhost podman[302063]: 2025-12-02 09:58:27.886926733 +0000 UTC m=+0.225709606 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 2 04:58:27 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 2 04:58:27 localhost podman[302067]: 2025-12-02 09:58:27.812923136 +0000 UTC m=+0.147944625 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 2 04:58:27 localhost podman[302062]: 2025-12-02 09:58:27.917914318 +0000 UTC m=+0.260520438 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 2 04:58:27 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 2 04:58:27 localhost podman[302067]: 2025-12-02 09:58:27.943254321 +0000 UTC m=+0.278275840 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec 2 04:58:27 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 04:58:28 localhost ceph-mon[301710]: mon.np0005541914@-1(probing) e17 my rank is now 2 (was -1)
Dec 2 04:58:28 localhost systemd[1]: tmp-crun.pqstXE.mount: Deactivated successfully.
Dec 2 04:58:28 localhost ceph-mon[301710]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 2 04:58:28 localhost ceph-mon[301710]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 2 04:58:28 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:28 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:28 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:28 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:28 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:58:28 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:28 localhost ceph-mon[301710]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 2 04:58:28 localhost ceph-mon[301710]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1
Dec 2 04:58:28 localhost ceph-mon[301710]: mon.np0005541914@2(electing) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:58:32 localhost ceph-mon[301710]: mon.np0005541914@2(electing) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:58:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 2 04:58:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec 2 04:58:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:58:32 localhost ceph-mon[301710]: mgrc update_daemon_metadata mon.np0005541914 metadata {addrs=[v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005541914.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005541914.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux}
Dec 2 04:58:32 localhost ceph-mon[301710]: mon.np0005541912 calling monitor election
Dec 2 04:58:32 localhost ceph-mon[301710]: mon.np0005541913 calling monitor election
Dec 2 04:58:32 localhost ceph-mon[301710]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 2 04:58:32 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:32 localhost ceph-mon[301710]: mon.np0005541914 calling monitor election
Dec 2 04:58:32 localhost ceph-mon[301710]: mon.np0005541912 is new leader, mons np0005541912,np0005541913,np0005541914 in quorum (ranks 0,1,2)
Dec 2 04:58:32 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:32 localhost ceph-mon[301710]: overall HEALTH_OK
Dec 2 04:58:32 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:33 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_auth_request failed to assign global_id
Dec 2 04:58:33 localhost podman[239757]: time="2025-12-02T09:58:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:58:33 localhost podman[239757]: @ - - [02/Dec/2025:09:58:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 2 04:58:33 localhost podman[239757]: @ - - [02/Dec/2025:09:58:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19178 "" "Go-http-client/1.1"
Dec 2 04:58:33 localhost ceph-mon[301710]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 2 04:58:34 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_auth_request failed to assign global_id
Dec 2 04:58:34 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:34 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:34 localhost ceph-mon[301710]: Reconfiguring osd.2 (monmap changed)...
Dec 2 04:58:34 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:58:34 localhost ceph-mon[301710]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:58:35 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_auth_request failed to assign global_id
Dec 2 04:58:35 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_auth_request failed to assign global_id
Dec 2 04:58:36 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:36 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:36 localhost ceph-mon[301710]: Reconfiguring osd.5 (monmap changed)...
Dec 2 04:58:36 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:58:36 localhost ceph-mon[301710]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:58:36 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:36 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_auth_request failed to assign global_id
Dec 2 04:58:37 localhost nova_compute[281045]: 2025-12-02 09:58:37.047 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:58:37 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:37 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:58:37 localhost ceph-mon[301710]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:58:37 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:58:37 localhost ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:58:37 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:37 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:37 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:58:37 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:58:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 04:58:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:58:38 localhost systemd[1]: tmp-crun.x5inZs.mount: Deactivated successfully.
Dec 2 04:58:38 localhost podman[302145]: 2025-12-02 09:58:38.118948477 +0000 UTC m=+0.124729137 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 2 04:58:38 localhost podman[302145]: 2025-12-02 09:58:38.127086965 +0000 UTC m=+0.132867625 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 2 04:58:38 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 04:58:38 localhost podman[302146]: 2025-12-02 09:58:38.130808478 +0000 UTC m=+0.133140893 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350)
Dec 2 04:58:38 localhost podman[302146]: 2025-12-02 09:58:38.213974896 +0000 UTC m=+0.216307301 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, release=1755695350, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 2 04:58:38 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 04:58:38 localhost ceph-mon[301710]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:58:38 localhost ceph-mon[301710]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:38 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:39 localhost ceph-mon[301710]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 2 04:58:39 localhost ceph-mon[301710]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 2 04:58:39 localhost ceph-mon[301710]: Reconfig service osd.default_drive_group
Dec 2 04:58:39 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:39 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:39 localhost ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 2 04:58:39 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e89 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 2 04:58:39 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e89 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 2 04:58:39 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 e90: 6 total, 6 up, 6 in
Dec 2 04:58:40 localhost systemd[1]: session-66.scope: Deactivated successfully.
Dec 2 04:58:40 localhost systemd[1]: session-66.scope: Consumed 23.312s CPU time.
Dec 2 04:58:40 localhost systemd-logind[760]: Session 66 logged out. Waiting for processes to exit.
Dec 2 04:58:40 localhost systemd-logind[760]: Removed session 66.
Dec 2 04:58:40 localhost sshd[302187]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:58:40 localhost systemd-logind[760]: New session 70 of user ceph-admin.
Dec 2 04:58:40 localhost systemd[1]: Started Session 70 of User ceph-admin.
Dec 2 04:58:40 localhost ceph-mon[301710]: Reconfiguring osd.0 (monmap changed)...
Dec 2 04:58:40 localhost ceph-mon[301710]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 2 04:58:40 localhost ceph-mon[301710]: from='client.? 172.18.0.200:0/3934454104' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 2 04:58:40 localhost ceph-mon[301710]: Activating manager daemon np0005541912.qwddia
Dec 2 04:58:40 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:40 localhost ceph-mon[301710]: from='client.? 172.18.0.200:0/3934454104' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 2 04:58:40 localhost ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:40 localhost ceph-mon[301710]: Manager daemon np0005541912.qwddia is now available
Dec 2 04:58:40 localhost ceph-mon[301710]: removing stray HostCache host record np0005541911.localdomain.devices.0
Dec 2 04:58:40 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain.devices.0"} : dispatch
Dec 2 04:58:40 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain.devices.0"}]': finished
Dec 2 04:58:40 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain.devices.0"} : dispatch
Dec 2 04:58:40 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain.devices.0"}]': finished
Dec 2 04:58:40 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541912.qwddia/mirror_snapshot_schedule"} : dispatch
Dec 2 04:58:40 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541912.qwddia/trash_purge_schedule"} : dispatch
Dec 2 04:58:41 localhost podman[302297]: 2025-12-02 09:58:41.429341105 +0000 UTC m=+0.091235304 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, CEPH_POINT_RELEASE=, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_CLEAN=True, ceph=True, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.expose-services=)
Dec 2 04:58:41 localhost podman[302297]: 2025-12-02 09:58:41.531974626 +0000 UTC m=+0.193868775 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, distribution-scope=public, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.buildah.version=1.41.4, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, ceph=True, vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, architecture=x86_64)
Dec 2 04:58:42 localhost openstack_network_exporter[241816]: ERROR 09:58:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:58:42 localhost openstack_network_exporter[241816]: ERROR 09:58:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:58:42 localhost openstack_network_exporter[241816]: ERROR 09:58:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:58:42 localhost openstack_network_exporter[241816]: ERROR 09:58:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:58:42 localhost openstack_network_exporter[241816]:
Dec 2 04:58:42 localhost openstack_network_exporter[241816]: ERROR 09:58:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:58:42 localhost openstack_network_exporter[241816]:
Dec 2 04:58:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 04:58:42 localhost podman[302432]: 2025-12-02 09:58:42.349804435 +0000 UTC m=+0.095645678 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 2 04:58:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1019530109 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:58:42 localhost podman[302432]: 2025-12-02 09:58:42.361137851 +0000 UTC m=+0.106979084 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 2 04:58:42 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 04:58:43 localhost ceph-mon[301710]: [02/Dec/2025:09:58:41] ENGINE Bus STARTING
Dec 2 04:58:43 localhost ceph-mon[301710]: [02/Dec/2025:09:58:41] ENGINE Serving on http://172.18.0.106:8765
Dec 2 04:58:43 localhost ceph-mon[301710]: [02/Dec/2025:09:58:41] ENGINE Serving on https://172.18.0.106:7150
Dec 2 04:58:43 localhost ceph-mon[301710]: [02/Dec/2025:09:58:41] ENGINE Bus STARTED
Dec 2 04:58:43 localhost ceph-mon[301710]: [02/Dec/2025:09:58:41] ENGINE Client ('172.18.0.106', 43976) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 2 04:58:43 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:43 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:43 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:43 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:43 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:43 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:44 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:44 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:44 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 2 04:58:44 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:44 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:44 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 2 04:58:44 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:44 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 2 04:58:44 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:44 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 2 04:58:44 localhost ceph-mon[301710]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 2 04:58:44 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 2 04:58:44 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 2 04:58:44 localhost ceph-mon[301710]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 2 04:58:44 localhost ceph-mon[301710]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:58:44 localhost ceph-mon[301710]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 2 04:58:44 localhost ceph-mon[301710]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:58:44 localhost ceph-mon[301710]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:58:44 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:58:44 localhost ceph-mon[301710]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 2 04:58:44 localhost ceph-mon[301710]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 2 04:58:44 localhost ceph-mon[301710]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 2 04:58:45 localhost ceph-mon[301710]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:58:45 localhost ceph-mon[301710]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:58:45 localhost ceph-mon[301710]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:58:45 localhost ceph-mon[301710]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:58:45 localhost ceph-mon[301710]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:58:45 localhost ceph-mon[301710]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:58:46 localhost ceph-mon[301710]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:58:46 localhost ceph-mon[301710]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:58:46 localhost ceph-mon[301710]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:58:46 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:46 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:46 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:46 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:46 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:46 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:46 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:46 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:58:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020041399 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:58:47 localhost ceph-mon[301710]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:58:47 localhost ceph-mon[301710]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Dec 2 04:58:47 localhost ceph-mon[301710]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Dec 2 04:58:47 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:47 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:47 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:47 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:47 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:58:48 localhost ceph-mon[301710]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:58:48 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:48 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:48 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:48 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:48 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 2 04:58:49 localhost ceph-mon[301710]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 2 04:58:49 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:49 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:49 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:49 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:49 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 2 04:58:51 localhost ceph-mon[301710]: Reconfiguring osd.3 (monmap changed)...
Dec 2 04:58:51 localhost ceph-mon[301710]: Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:58:51 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:51 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:51 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:51 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:51 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:51 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:52 localhost ceph-mon[301710]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... 
Dec 2 04:58:52 localhost ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:58:52 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:52 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:52 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054389 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:58:53 localhost ceph-mon[301710]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)... Dec 2 04:58:53 localhost ceph-mon[301710]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:58:53 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:53 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:53 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:53 localhost podman[303267]: Dec 2 04:58:53 localhost podman[303267]: 2025-12-02 09:58:53.333859073 +0000 UTC m=+0.071809263 container create aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_feistel, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.expose-services=, ceph=True, vendor=Red Hat, Inc., RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, version=7, GIT_CLEAN=True, distribution-scope=public, name=rhceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:58:53 localhost systemd[1]: Started libpod-conmon-aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee.scope. Dec 2 04:58:53 localhost podman[303267]: 2025-12-02 09:58:53.307442617 +0000 UTC m=+0.045392847 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:53 localhost systemd[1]: Started libcrun container. 
Dec 2 04:58:53 localhost podman[303267]: 2025-12-02 09:58:53.428225972 +0000 UTC m=+0.166176142 container init aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_feistel, CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.buildah.version=1.41.4, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.tags=rhceph ceph, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, version=7, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:58:53 localhost systemd[1]: tmp-crun.qjucLs.mount: Deactivated successfully. 
Dec 2 04:58:53 localhost podman[303267]: 2025-12-02 09:58:53.44689437 +0000 UTC m=+0.184844560 container start aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_feistel, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vendor=Red Hat, Inc., architecture=x86_64, RELEASE=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container) Dec 2 04:58:53 localhost podman[303267]: 2025-12-02 09:58:53.449008505 +0000 UTC m=+0.186958735 container attach aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_feistel, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, RELEASE=main, io.buildah.version=1.41.4, architecture=x86_64, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_CLEAN=True) Dec 2 04:58:53 localhost eager_feistel[303282]: 167 167 Dec 2 04:58:53 localhost systemd[1]: libpod-aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee.scope: Deactivated successfully. Dec 2 04:58:53 localhost podman[303267]: 2025-12-02 09:58:53.450822131 +0000 UTC m=+0.188772331 container died aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_feistel, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, name=rhceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:58:53 localhost podman[303287]: 2025-12-02 09:58:53.542155157 +0000 UTC m=+0.079248649 container remove aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_feistel, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, RELEASE=main, release=1763362218, io.openshift.expose-services=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7) Dec 2 04:58:53 localhost nova_compute[281045]: 2025-12-02 09:58:53.543 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:53 localhost nova_compute[281045]: 2025-12-02 
09:58:53.544 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:53 localhost systemd[1]: libpod-conmon-aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee.scope: Deactivated successfully. Dec 2 04:58:54 localhost ceph-mon[301710]: Reconfiguring crash.np0005541914 (monmap changed)... Dec 2 04:58:54 localhost ceph-mon[301710]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain Dec 2 04:58:54 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:54 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:54 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 2 04:58:54 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:54 localhost podman[303355]: Dec 2 04:58:54 localhost podman[303355]: 2025-12-02 09:58:54.200735178 +0000 UTC m=+0.083654443 container create c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_cerf, build-date=2025-11-26T19:44:28Z, version=7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, description=Red Hat Ceph Storage 7, 
url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=1763362218, vcs-type=git, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, name=rhceph, GIT_CLEAN=True, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=) Dec 2 04:58:54 localhost systemd[1]: Started libpod-conmon-c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb.scope. Dec 2 04:58:54 localhost systemd[1]: Started libcrun container. Dec 2 04:58:54 localhost podman[303355]: 2025-12-02 09:58:54.163008407 +0000 UTC m=+0.045927692 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:54 localhost podman[303355]: 2025-12-02 09:58:54.263692689 +0000 UTC m=+0.146611944 container init c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_cerf, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, 
RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git) Dec 2 04:58:54 localhost podman[303355]: 2025-12-02 09:58:54.273188028 +0000 UTC m=+0.156107293 container start c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_cerf, RELEASE=main, vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, distribution-scope=public, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.expose-services=, ceph=True, build-date=2025-11-26T19:44:28Z, version=7) Dec 2 04:58:54 localhost podman[303355]: 2025-12-02 09:58:54.273470317 +0000 UTC m=+0.156389552 container attach c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_cerf, io.openshift.expose-services=, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public) Dec 2 04:58:54 localhost youthful_cerf[303370]: 167 167 Dec 2 04:58:54 localhost systemd[1]: libpod-c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb.scope: Deactivated successfully. 
Dec 2 04:58:54 localhost podman[303355]: 2025-12-02 09:58:54.276035895 +0000 UTC m=+0.158955160 container died c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_cerf, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, ceph=True, release=1763362218, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, name=rhceph, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public) Dec 2 04:58:54 localhost systemd[1]: tmp-crun.tkDCzX.mount: Deactivated successfully. Dec 2 04:58:54 localhost systemd[1]: var-lib-containers-storage-overlay-8bd2fdd2c9a493bc38c8b9420a1d473bcc0050df428b713cd211592315aec6cd-merged.mount: Deactivated successfully. Dec 2 04:58:54 localhost systemd[1]: var-lib-containers-storage-overlay-db2dffee161b65f57f5f95e74589b0168517f5da6ca64dfc9c445f128c2ec216-merged.mount: Deactivated successfully. 
Dec 2 04:58:54 localhost podman[303375]: 2025-12-02 09:58:54.378427829 +0000 UTC m=+0.087717237 container remove c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_cerf, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, version=7, vendor=Red Hat, Inc., RELEASE=main, architecture=x86_64, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=) Dec 2 04:58:54 localhost systemd[1]: libpod-conmon-c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb.scope: Deactivated successfully. 
Dec 2 04:58:54 localhost nova_compute[281045]: 2025-12-02 09:58:54.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:54 localhost nova_compute[281045]: 2025-12-02 09:58:54.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:58:54 localhost nova_compute[281045]: 2025-12-02 09:58:54.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:58:54 localhost nova_compute[281045]: 2025-12-02 09:58:54.675 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:58:54 localhost nova_compute[281045]: 2025-12-02 09:58:54.676 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:54 localhost nova_compute[281045]: 2025-12-02 09:58:54.676 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:55 localhost ceph-mon[301710]: Reconfiguring osd.1 (monmap changed)... 
Dec 2 04:58:55 localhost ceph-mon[301710]: Reconfiguring daemon osd.1 on np0005541914.localdomain Dec 2 04:58:55 localhost ceph-mon[301710]: Saving service mon spec with placement label:mon Dec 2 04:58:55 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:55 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:55 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:55 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:55 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 2 04:58:55 localhost podman[303452]: Dec 2 04:58:55 localhost podman[303452]: 2025-12-02 09:58:55.234512805 +0000 UTC m=+0.066493229 container create d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_kalam, GIT_BRANCH=main, CEPH_POINT_RELEASE=, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, version=7, 
io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64) Dec 2 04:58:55 localhost systemd[1]: Started libpod-conmon-d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a.scope. Dec 2 04:58:55 localhost systemd[1]: Started libcrun container. Dec 2 04:58:55 localhost podman[303452]: 2025-12-02 09:58:55.203909232 +0000 UTC m=+0.035889666 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:55 localhost podman[303452]: 2025-12-02 09:58:55.303419217 +0000 UTC m=+0.135399631 container init d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_kalam, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:58:55 localhost podman[303452]: 2025-12-02 09:58:55.31237521 +0000 UTC m=+0.144355594 
container start d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_kalam, distribution-scope=public, GIT_CLEAN=True, RELEASE=main, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, release=1763362218, maintainer=Guillaume Abrioux , ceph=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.buildah.version=1.41.4, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container) Dec 2 04:58:55 localhost podman[303452]: 2025-12-02 09:58:55.312725531 +0000 UTC m=+0.144705955 container attach d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_kalam, vcs-type=git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_BRANCH=main, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the 
latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, version=7, vendor=Red Hat, Inc., ceph=True) Dec 2 04:58:55 localhost crazy_kalam[303467]: 167 167 Dec 2 04:58:55 localhost systemd[1]: libpod-d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a.scope: Deactivated successfully. Dec 2 04:58:55 localhost podman[303452]: 2025-12-02 09:58:55.315882797 +0000 UTC m=+0.147863221 container died d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_kalam, version=7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, release=1763362218, maintainer=Guillaume Abrioux , name=rhceph, distribution-scope=public, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, description=Red Hat Ceph 
Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z) Dec 2 04:58:55 localhost systemd[1]: var-lib-containers-storage-overlay-20245e23ab31c2e5ab74780b8cd433c5950892d5a190813f0ed10f98125e47a6-merged.mount: Deactivated successfully. Dec 2 04:58:55 localhost podman[303472]: 2025-12-02 09:58:55.411805784 +0000 UTC m=+0.082527209 container remove d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_kalam, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , release=1763362218, version=7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4) Dec 2 04:58:55 localhost systemd[1]: libpod-conmon-d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a.scope: Deactivated successfully. 
Dec 2 04:58:55 localhost nova_compute[281045]: 2025-12-02 09:58:55.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:56 localhost podman[303549]: Dec 2 04:58:56 localhost podman[303549]: 2025-12-02 09:58:56.172461468 +0000 UTC m=+0.046502499 container create 365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_mccarthy, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.buildah.version=1.41.4, RELEASE=main, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7) Dec 2 04:58:56 localhost systemd[1]: Started libpod-conmon-365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839.scope. Dec 2 04:58:56 localhost ceph-mon[301710]: Reconfiguring osd.4 (monmap changed)... 
Dec 2 04:58:56 localhost ceph-mon[301710]: Reconfiguring daemon osd.4 on np0005541914.localdomain Dec 2 04:58:56 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:56 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:56 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:56 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:56 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:56 localhost systemd[1]: Started libcrun container. Dec 2 04:58:56 localhost podman[303549]: 2025-12-02 09:58:56.239799453 +0000 UTC m=+0.113840504 container init 365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_mccarthy, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, release=1763362218, version=7, io.openshift.expose-services=, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., 
io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:58:56 localhost affectionate_mccarthy[303564]: 167 167 Dec 2 04:58:56 localhost podman[303549]: 2025-12-02 09:58:56.246808847 +0000 UTC m=+0.120849908 container start 365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_mccarthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.buildah.version=1.41.4, version=7, maintainer=Guillaume Abrioux ) Dec 2 04:58:56 localhost podman[303549]: 2025-12-02 09:58:56.247095896 +0000 UTC m=+0.121136957 container attach 365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_mccarthy, com.redhat.component=rhceph-container, 
release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, CEPH_POINT_RELEASE=, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, maintainer=Guillaume Abrioux , version=7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:58:56 localhost systemd[1]: libpod-365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839.scope: Deactivated successfully. 
Dec 2 04:58:56 localhost podman[303549]: 2025-12-02 09:58:56.24954354 +0000 UTC m=+0.123584641 container died 365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_mccarthy, vcs-type=git, io.buildah.version=1.41.4, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main, description=Red Hat Ceph Storage 7) Dec 2 04:58:56 localhost podman[303549]: 2025-12-02 09:58:56.152252212 +0000 UTC m=+0.026293273 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:56 localhost podman[303569]: 2025-12-02 09:58:56.331304134 +0000 UTC m=+0.071680898 container remove 365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_mccarthy, io.openshift.expose-services=, RELEASE=main, ceph=True, maintainer=Guillaume Abrioux , release=1763362218, distribution-scope=public, architecture=x86_64, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, 
build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, version=7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:58:56 localhost systemd[1]: libpod-conmon-365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839.scope: Deactivated successfully. Dec 2 04:58:56 localhost systemd[1]: var-lib-containers-storage-overlay-3b3da472257dce8851c305b949cf3b9500004340f7e88cbb81c0ade7fefeb9ec-merged.mount: Deactivated successfully. Dec 2 04:58:56 localhost nova_compute[281045]: 2025-12-02 09:58:56.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:56 localhost nova_compute[281045]: 2025-12-02 09:58:56.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:58:56 localhost nova_compute[281045]: 2025-12-02 09:58:56.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:56 localhost nova_compute[281045]: 2025-12-02 09:58:56.550 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:58:56 localhost nova_compute[281045]: 2025-12-02 09:58:56.551 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:58:56 localhost nova_compute[281045]: 2025-12-02 09:58:56.551 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:58:56 localhost nova_compute[281045]: 2025-12-02 09:58:56.551 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:58:56 localhost nova_compute[281045]: 2025-12-02 09:58:56.551 
281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:58:56 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:58:56 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1267034607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:58:56 localhost nova_compute[281045]: 2025-12-02 09:58:56.960 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:58:57 localhost podman[303659]: Dec 2 04:58:57 localhost podman[303659]: 2025-12-02 09:58:57.101701217 +0000 UTC m=+0.081872059 container create fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_raman, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vcs-type=git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, release=1763362218, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, name=rhceph, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph 
Storage 7, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Dec 2 04:58:57 localhost nova_compute[281045]: 2025-12-02 09:58:57.130 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:58:57 localhost nova_compute[281045]: 2025-12-02 09:58:57.131 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11959MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": 
"pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:58:57 localhost nova_compute[281045]: 2025-12-02 09:58:57.131 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:58:57 localhost nova_compute[281045]: 2025-12-02 09:58:57.131 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:58:57 localhost systemd[1]: Started libpod-conmon-fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4.scope. Dec 2 04:58:57 localhost systemd[1]: Started libcrun container. 
Dec 2 04:58:57 localhost podman[303659]: 2025-12-02 09:58:57.172923819 +0000 UTC m=+0.153094691 container init fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_raman, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, release=1763362218, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=) Dec 2 04:58:57 localhost podman[303659]: 2025-12-02 09:58:57.074466076 +0000 UTC m=+0.054636968 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:57 localhost systemd[1]: tmp-crun.1vwcUb.mount: Deactivated successfully. 
Dec 2 04:58:57 localhost podman[303659]: 2025-12-02 09:58:57.186243476 +0000 UTC m=+0.166414358 container start fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_raman, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, distribution-scope=public, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, com.redhat.component=rhceph-container, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph) Dec 2 04:58:57 localhost podman[303659]: 2025-12-02 09:58:57.186523414 +0000 UTC m=+0.166694296 container attach fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_raman, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , release=1763362218, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_BRANCH=main, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:58:57 localhost pedantic_raman[303674]: 167 167 Dec 2 04:58:57 localhost systemd[1]: libpod-fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4.scope: Deactivated successfully. Dec 2 04:58:57 localhost podman[303659]: 2025-12-02 09:58:57.189792774 +0000 UTC m=+0.169963656 container died fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_raman, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 
on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:58:57 localhost ceph-mon[301710]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)... Dec 2 04:58:57 localhost ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain Dec 2 04:58:57 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:57 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:57 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:57 localhost podman[303679]: 2025-12-02 09:58:57.262393709 +0000 UTC m=+0.064390516 container remove fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_raman, GIT_CLEAN=True, io.buildah.version=1.41.4, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, ceph=True) Dec 2 04:58:57 localhost systemd[1]: libpod-conmon-fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4.scope: Deactivated successfully. Dec 2 04:58:57 localhost nova_compute[281045]: 2025-12-02 09:58:57.306 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:58:57 localhost nova_compute[281045]: 2025-12-02 09:58:57.307 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:58:57 localhost nova_compute[281045]: 2025-12-02 09:58:57.335 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:58:57 localhost systemd[1]: var-lib-containers-storage-overlay-495600bccb1dc696ff159e62514e2403c9fc286289c8abd65f5f5bcfa3408f88-merged.mount: Deactivated successfully. 
Dec 2 04:58:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054722 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:58:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:58:57 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2732574246' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:58:57 localhost nova_compute[281045]: 2025-12-02 09:58:57.748 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:58:57 localhost nova_compute[281045]: 2025-12-02 09:58:57.756 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:58:57 localhost nova_compute[281045]: 2025-12-02 09:58:57.776 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:58:57 localhost nova_compute[281045]: 
2025-12-02 09:58:57.780 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:58:57 localhost nova_compute[281045]: 2025-12-02 09:58:57.781 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:58:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 04:58:57 localhost podman[303776]: Dec 2 04:58:57 localhost podman[303776]: 2025-12-02 09:58:57.923565109 +0000 UTC m=+0.058905188 container create 1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_lederberg, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, version=7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, name=rhceph, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph) Dec 2 04:58:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 04:58:57 localhost systemd[1]: tmp-crun.2Kwi84.mount: Deactivated successfully. Dec 2 04:58:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 04:58:57 localhost systemd[1]: Started libpod-conmon-1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665.scope. Dec 2 04:58:57 localhost podman[303768]: 2025-12-02 09:58:57.943320971 +0000 UTC m=+0.093659188 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Dec 2 04:58:57 localhost systemd[1]: Started libcrun container. Dec 2 04:58:57 localhost podman[303768]: 2025-12-02 09:58:57.972006477 +0000 UTC m=+0.122344694 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 2 04:58:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 04:58:57 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 04:58:57 localhost podman[303776]: 2025-12-02 09:58:57.896575026 +0000 UTC m=+0.031915115 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:58 localhost podman[303799]: 2025-12-02 09:58:58.014571045 +0000 UTC m=+0.070350627 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 
'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Dec 2 04:58:58 localhost podman[303826]: 2025-12-02 09:58:58.039017991 +0000 UTC m=+0.054867645 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 2 04:58:58 localhost podman[303776]: 2025-12-02 09:58:58.102434355 +0000 UTC m=+0.237774414 container init 1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_lederberg, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.tags=rhceph ceph, name=rhceph, distribution-scope=public, GIT_CLEAN=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.expose-services=, vcs-type=git, release=1763362218, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:58:58 localhost podman[303776]: 2025-12-02 09:58:58.110284125 +0000 UTC m=+0.245624164 container start 
1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_lederberg, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, version=7, ceph=True, release=1763362218, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_BRANCH=main) Dec 2 04:58:58 localhost podman[303776]: 2025-12-02 09:58:58.110716718 +0000 UTC m=+0.246056787 container attach 1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_lederberg, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.41.4, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, ceph=True, architecture=x86_64, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-type=git, RELEASE=main, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:58:58 localhost strange_lederberg[303812]: 167 167 Dec 2 04:58:58 localhost systemd[1]: libpod-1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665.scope: Deactivated successfully. Dec 2 04:58:58 localhost podman[303776]: 2025-12-02 09:58:58.113027409 +0000 UTC m=+0.248367488 container died 1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_lederberg, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, distribution-scope=public, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, GIT_CLEAN=True, 
url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:58:58 localhost podman[303826]: 2025-12-02 09:58:58.143777137 +0000 UTC m=+0.159626791 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller) Dec 2 04:58:58 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:58:58 localhost podman[303801]: 2025-12-02 09:58:58.232559806 +0000 UTC m=+0.286086329 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:58:58 localhost ceph-mon[301710]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)... 
Dec 2 04:58:58 localhost ceph-mon[301710]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 04:58:58 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:58 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:58 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:58:58 localhost podman[303801]: 2025-12-02 09:58:58.240853289 +0000 UTC m=+0.294379812 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:58:58 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 04:58:58 localhost podman[303799]: 2025-12-02 09:58:58.252190505 +0000 UTC m=+0.307970087 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:58:58 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 04:58:58 localhost podman[303864]: 2025-12-02 09:58:58.337653131 +0000 UTC m=+0.213847254 container remove 1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_lederberg, architecture=x86_64, io.buildah.version=1.41.4, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.openshift.expose-services=) Dec 2 04:58:58 localhost systemd[1]: var-lib-containers-storage-overlay-229bd05a0662cada040fc5af96fa1bb6353596f71e74bec2dac31c7b2190f361-merged.mount: Deactivated successfully. Dec 2 04:58:58 localhost systemd[1]: libpod-conmon-1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665.scope: Deactivated successfully. 
Dec 2 04:58:58 localhost nova_compute[281045]: 2025-12-02 09:58:58.783 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:59 localhost ceph-mon[301710]: Reconfiguring mon.np0005541914 (monmap changed)... Dec 2 04:58:59 localhost ceph-mon[301710]: Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain Dec 2 04:58:59 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:59 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:59 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:58:59 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:59 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:59 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:59:00 localhost ceph-mon[301710]: Reconfiguring mon.np0005541912 (monmap changed)... Dec 2 04:59:00 localhost ceph-mon[301710]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain Dec 2 04:59:00 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:59:00 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:59:00 localhost ceph-mon[301710]: Reconfiguring mon.np0005541913 (monmap changed)... 
Dec 2 04:59:00 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:59:00 localhost ceph-mon[301710]: Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain Dec 2 04:59:01 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:59:01 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:59:01 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:59:01 localhost systemd[1]: session-68.scope: Deactivated successfully. Dec 2 04:59:01 localhost systemd[1]: session-68.scope: Consumed 1.656s CPU time. Dec 2 04:59:01 localhost systemd-logind[760]: Session 68 logged out. Waiting for processes to exit. Dec 2 04:59:01 localhost systemd-logind[760]: Removed session 68. Dec 2 04:59:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:59:03.169 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:59:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:59:03.170 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:59:03 localhost ovn_metadata_agent[159477]: 2025-12-02 09:59:03.171 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:59:03 localhost podman[239757]: time="2025-12-02T09:59:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:59:03 localhost podman[239757]: @ - - [02/Dec/2025:09:59:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 04:59:03 localhost podman[239757]: @ - - [02/Dec/2025:09:59:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19179 "" "Go-http-client/1.1" Dec 2 04:59:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 04:59:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 04:59:08 localhost podman[303911]: 2025-12-02 09:59:08.601701271 +0000 UTC m=+0.094788212 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:59:08 localhost podman[303911]: 2025-12-02 09:59:08.613957965 +0000 UTC m=+0.107044926 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:59:08 localhost podman[303912]: 2025-12-02 09:59:08.654688788 +0000 UTC m=+0.144033905 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter) Dec 2 04:59:08 localhost podman[303912]: 2025-12-02 09:59:08.671962105 +0000 UTC m=+0.161307212 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, config_id=edpm, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Dec 2 04:59:08 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 04:59:08 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 04:59:11 localhost systemd[1]: Stopping User Manager for UID 1003...
Dec 2 04:59:11 localhost systemd[299678]: Activating special unit Exit the Session...
Dec 2 04:59:11 localhost systemd[299678]: Stopped target Main User Target.
Dec 2 04:59:11 localhost systemd[299678]: Stopped target Basic System.
Dec 2 04:59:11 localhost systemd[299678]: Stopped target Paths.
Dec 2 04:59:11 localhost systemd[299678]: Stopped target Sockets.
Dec 2 04:59:11 localhost systemd[299678]: Stopped target Timers.
Dec 2 04:59:11 localhost systemd[299678]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 2 04:59:11 localhost systemd[299678]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 2 04:59:11 localhost systemd[299678]: Closed D-Bus User Message Bus Socket.
Dec 2 04:59:11 localhost systemd[299678]: Stopped Create User's Volatile Files and Directories.
Dec 2 04:59:11 localhost systemd[299678]: Removed slice User Application Slice.
Dec 2 04:59:11 localhost systemd[299678]: Reached target Shutdown.
Dec 2 04:59:11 localhost systemd[299678]: Finished Exit the Session.
Dec 2 04:59:11 localhost systemd[299678]: Reached target Exit the Session.
Dec 2 04:59:11 localhost systemd[1]: user@1003.service: Deactivated successfully.
Dec 2 04:59:11 localhost systemd[1]: Stopped User Manager for UID 1003.
Dec 2 04:59:11 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003...
Dec 2 04:59:12 localhost systemd[1]: run-user-1003.mount: Deactivated successfully.
Dec 2 04:59:12 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Dec 2 04:59:12 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003.
Dec 2 04:59:12 localhost systemd[1]: Removed slice User Slice of UID 1003.
Dec 2 04:59:12 localhost systemd[1]: user-1003.slice: Consumed 2.131s CPU time.
Dec 2 04:59:12 localhost openstack_network_exporter[241816]: ERROR 09:59:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:59:12 localhost openstack_network_exporter[241816]: ERROR 09:59:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:59:12 localhost openstack_network_exporter[241816]: ERROR 09:59:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:59:12 localhost openstack_network_exporter[241816]: ERROR 09:59:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:59:12 localhost openstack_network_exporter[241816]:
Dec 2 04:59:12 localhost openstack_network_exporter[241816]: ERROR 09:59:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:59:12 localhost openstack_network_exporter[241816]:
Dec 2 04:59:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:59:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 04:59:13 localhost podman[303956]: 2025-12-02 09:59:13.080749922 +0000 UTC m=+0.078805006 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:59:13 localhost podman[303956]: 2025-12-02 09:59:13.095246424 +0000 UTC m=+0.093301458 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:59:13 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:59:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:59:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:59:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:59:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 04:59:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 04:59:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 04:59:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 04:59:29 localhost systemd[297067]: Starting Mark boot as successful...
Dec 2 04:59:29 localhost podman[303977]: 2025-12-02 09:59:29.073703114 +0000 UTC m=+0.070516833 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:59:29 localhost systemd[297067]: Finished Mark boot as successful. 
Dec 2 04:59:29 localhost podman[303977]: 2025-12-02 09:59:29.106510455 +0000 UTC m=+0.103324164 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:59:29 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 04:59:29 localhost podman[303976]: 2025-12-02 09:59:29.191639491 +0000 UTC m=+0.190113211 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 04:59:29 localhost podman[303976]: 2025-12-02 09:59:29.201000397 +0000 UTC 
m=+0.199474107 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 2 04:59:29 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 04:59:29 localhost podman[303986]: 2025-12-02 09:59:29.160949305 +0000 UTC m=+0.147122610 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 04:59:29 localhost podman[303978]: 2025-12-02 09:59:29.256042766 +0000 UTC m=+0.246088558 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 
'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Dec 2 04:59:29 localhost podman[303978]: 2025-12-02 09:59:29.267763013 +0000 UTC m=+0.257808825 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 
'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Dec 2 04:59:29 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 04:59:29 localhost podman[303986]: 2025-12-02 09:59:29.355141139 +0000 UTC m=+0.341314474 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller) Dec 2 04:59:29 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 04:59:29 localhost sshd[304061]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:59:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:59:33 localhost podman[239757]: time="2025-12-02T09:59:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:59:33 localhost podman[239757]: @ - - [02/Dec/2025:09:59:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 2 04:59:33 localhost podman[239757]: @ - - [02/Dec/2025:09:59:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19177 "" "Go-http-client/1.1"
Dec 2 04:59:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:59:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 04:59:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 04:59:39 localhost podman[304064]: 2025-12-02 09:59:39.208536658 +0000 UTC m=+0.212665499 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, io.openshift.expose-services=, distribution-scope=public, version=9.6, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses 
microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:59:39 localhost podman[304064]: 2025-12-02 09:59:39.30721263 +0000 UTC m=+0.311341531 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.6, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_id=edpm, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=) Dec 2 04:59:39 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 04:59:39 localhost podman[304063]: 2025-12-02 09:59:39.273161048 +0000 UTC m=+0.277966080 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:59:39 localhost podman[304063]: 2025-12-02 09:59:39.35702128 +0000 UTC m=+0.361826312 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:59:39 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 04:59:42 localhost openstack_network_exporter[241816]: ERROR 09:59:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:59:42 localhost openstack_network_exporter[241816]: ERROR 09:59:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:59:42 localhost openstack_network_exporter[241816]: ERROR 09:59:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:59:42 localhost openstack_network_exporter[241816]: Dec 2 04:59:42 localhost openstack_network_exporter[241816]: ERROR 09:59:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:59:42 localhost openstack_network_exporter[241816]: ERROR 09:59:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:59:42 localhost openstack_network_exporter[241816]: Dec 2 04:59:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 04:59:44 localhost systemd[1]: tmp-crun.3MJGFn.mount: Deactivated successfully. 
Dec 2 04:59:44 localhost podman[304104]: 2025-12-02 09:59:44.056159285 +0000 UTC m=+0.066817407 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 04:59:44 localhost podman[304104]: 2025-12-02 09:59:44.096955012 +0000 UTC m=+0.107613124 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 04:59:44 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 04:59:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:50 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) Dec 2 04:59:50 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/4215452679' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch Dec 2 04:59:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:53 localhost nova_compute[281045]: 2025-12-02 09:59:53.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:55 localhost nova_compute[281045]: 2025-12-02 09:59:55.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:55 localhost nova_compute[281045]: 2025-12-02 09:59:55.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:55 localhost nova_compute[281045]: 2025-12-02 09:59:55.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:56 localhost nova_compute[281045]: 2025-12-02 09:59:56.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:56 localhost nova_compute[281045]: 2025-12-02 09:59:56.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:59:56 localhost nova_compute[281045]: 2025-12-02 09:59:56.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:59:56 localhost nova_compute[281045]: 2025-12-02 09:59:56.547 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 04:59:56 localhost nova_compute[281045]: 2025-12-02 09:59:56.548 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:56 localhost nova_compute[281045]: 2025-12-02 09:59:56.549 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:56 localhost nova_compute[281045]: 2025-12-02 09:59:56.571 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:59:56 localhost nova_compute[281045]: 2025-12-02 09:59:56.572 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:59:56 localhost nova_compute[281045]: 2025-12-02 09:59:56.572 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:59:56 localhost nova_compute[281045]: 2025-12-02 09:59:56.573 281049 DEBUG nova.compute.resource_tracker [None 
req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:59:56 localhost nova_compute[281045]: 2025-12-02 09:59:56.573 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:59:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:59:57 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2437377944' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:59:57 localhost nova_compute[281045]: 2025-12-02 09:59:57.017 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:59:57 localhost nova_compute[281045]: 2025-12-02 09:59:57.195 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:59:57 localhost nova_compute[281045]: 2025-12-02 09:59:57.196 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11999MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:59:57 localhost nova_compute[281045]: 2025-12-02 09:59:57.197 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:59:57 localhost nova_compute[281045]: 2025-12-02 09:59:57.197 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:59:57 localhost nova_compute[281045]: 2025-12-02 09:59:57.267 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:59:57 localhost nova_compute[281045]: 2025-12-02 09:59:57.268 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:59:57 localhost nova_compute[281045]: 2025-12-02 09:59:57.286 281049 DEBUG 
oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:59:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:59:57 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2802866471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:59:57 localhost nova_compute[281045]: 2025-12-02 09:59:57.769 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:59:57 localhost nova_compute[281045]: 2025-12-02 09:59:57.774 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:59:57 localhost nova_compute[281045]: 2025-12-02 09:59:57.801 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 
'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:59:57 localhost nova_compute[281045]: 2025-12-02 09:59:57.804 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:59:57 localhost nova_compute[281045]: 2025-12-02 09:59:57.804 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:59:58 localhost nova_compute[281045]: 2025-12-02 09:59:58.784 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:58 localhost nova_compute[281045]: 2025-12-02 09:59:58.910 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:58 localhost nova_compute[281045]: 2025-12-02 09:59:58.910 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:58 localhost nova_compute[281045]: 
2025-12-02 09:59:58.910 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:00:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:00:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:00:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:00:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:00:00 localhost systemd[1]: tmp-crun.2d2oan.mount: Deactivated successfully. Dec 2 05:00:00 localhost podman[304168]: 2025-12-02 10:00:00.112998573 +0000 UTC m=+0.072133158 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 
05:00:00 localhost podman[304170]: 2025-12-02 10:00:00.174520708 +0000 UTC m=+0.125359982 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:00:00 localhost podman[304168]: 2025-12-02 10:00:00.192269666 +0000 UTC m=+0.151404241 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:00:00 localhost podman[304169]: 2025-12-02 10:00:00.154093168 +0000 UTC m=+0.105678184 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible) Dec 2 05:00:00 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:00:00 localhost podman[304170]: 2025-12-02 10:00:00.206871919 +0000 UTC m=+0.157711193 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=ovn_controller, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:00:00 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 05:00:00 localhost podman[304167]: 2025-12-02 10:00:00.266790826 +0000 UTC m=+0.225275782 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:00:00 localhost podman[304167]: 2025-12-02 10:00:00.271850219 +0000 UTC m=+0.230335195 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:00:00 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 05:00:00 localhost podman[304169]: 2025-12-02 10:00:00.291893947 +0000 UTC m=+0.243478913 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=edpm) Dec 2 05:00:00 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:00:00 localhost ceph-mon[301710]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 2 05:00:00 localhost ceph-mon[301710]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Dec 2 05:00:00 localhost ceph-mon[301710]: stray daemon mgr.np0005541911.adcgiw on host np0005541911.localdomain not managed by cephadm Dec 2 05:00:00 localhost ceph-mon[301710]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 2 05:00:00 localhost ceph-mon[301710]: stray host np0005541911.localdomain has 1 stray daemons: ['mgr.np0005541911.adcgiw'] Dec 2 05:00:01 localhost systemd[1]: tmp-crun.is8bts.mount: Deactivated successfully. 
Dec 2 05:00:01 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:00:01 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 05:00:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:00:03.171 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:00:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:00:03.172 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:00:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:00:03.172 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:00:03 localhost podman[239757]: time="2025-12-02T10:00:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:00:03 localhost podman[239757]: @ - - [02/Dec/2025:10:00:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:00:03 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} 
v 0) Dec 2 05:00:03 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/2832629924' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:00:03 localhost podman[239757]: @ - - [02/Dec/2025:10:00:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19174 "" "Go-http-client/1.1" Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.493281) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605493354, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 14043, "num_deletes": 261, "total_data_size": 24103905, "memory_usage": 25278912, "flush_reason": "Manual Compaction"} Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605608518, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 18536123, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 14048, "table_properties": {"data_size": 18463960, "index_size": 40249, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30853, "raw_key_size": 327875, "raw_average_key_size": 26, "raw_value_size": 
18252036, "raw_average_value_size": 1479, "num_data_blocks": 1542, "num_entries": 12334, "num_filter_entries": 12334, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669504, "oldest_key_time": 1764669504, "file_creation_time": 1764669605, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 115304 microseconds, and 36661 cpu microseconds. 
Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.608579) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 18536123 bytes OK Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.608615) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.610641) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.610698) EVENT_LOG_v1 {"time_micros": 1764669605610686, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.610724) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 24011403, prev total WAL file size 24011727, number of live WAL files 2. Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.615077) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. 
'7061786F73003131353437' seq:0, type:0; will stop at (end) Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(17MB) 8(1762B)] Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605615209, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 18537885, "oldest_snapshot_seqno": -1} Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 12082 keys, 18532582 bytes, temperature: kUnknown Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605746133, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 18532582, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18461069, "index_size": 40244, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30213, "raw_key_size": 322992, "raw_average_key_size": 26, "raw_value_size": 18252441, "raw_average_value_size": 1510, "num_data_blocks": 1542, "num_entries": 12082, "num_filter_entries": 12082, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764669605, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.746518) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 18532582 bytes Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.748367) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.5 rd, 141.4 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(17.7, 0.0 +0.0 blob) out(17.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 12339, records dropped: 257 output_compression: NoCompression Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.748409) EVENT_LOG_v1 {"time_micros": 1764669605748389, "job": 4, "event": "compaction_finished", "compaction_time_micros": 131025, "compaction_time_cpu_micros": 51233, "output_level": 6, "num_output_files": 1, "total_output_size": 18532582, "num_input_records": 12339, "num_output_records": 12082, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000014.sst immediately, 
rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605752071, "job": 4, "event": "table_file_deletion", "file_number": 14} Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605752169, "job": 4, "event": "table_file_deletion", "file_number": 8} Dec 2 05:00:05 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.614899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:05 localhost ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 e91: 6 total, 6 up, 6 in Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr handle_mgr_map Activating! 
Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr handle_mgr_map I am now activating Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541912"} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005541914.sqgqkj"} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541914.sqgqkj"} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon).mds e16 all = 0 Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005541913.maexpe"} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds 
metadata", "who": "mds.np0005541913.maexpe"} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon).mds e16 all = 0 Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005541912.ghcwcm"} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541912.ghcwcm"} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon).mds e16 all = 0 Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 0} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Dec 2 05:00:06 
localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 1} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 2} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 3} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 4} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 5} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mds metadata"} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata"} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon).mds e16 all = 1 Dec 2 05:00:06 localhost ceph-mon[301710]: 
mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd metadata"} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata"} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon metadata"} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata"} : dispatch Dec 2 05:00:06 localhost ceph-mgr[287188]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr load Constructed class from module: balancer Dec 2 05:00:06 localhost ceph-mgr[287188]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 05:00:06 localhost ceph-mgr[287188]: [balancer INFO root] Starting Dec 2 05:00:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:00:06 Dec 2 05:00:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 05:00:06 localhost ceph-mgr[287188]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr load Constructed class from module: cephadm Dec 2 05:00:06 localhost ceph-mgr[287188]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr load Constructed class from module: crash Dec 2 05:00:06 localhost systemd[1]: session-70.scope: Deactivated successfully. Dec 2 05:00:06 localhost systemd[1]: session-70.scope: Consumed 11.185s CPU time. 
Dec 2 05:00:06 localhost ceph-mgr[287188]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr load Constructed class from module: devicehealth Dec 2 05:00:06 localhost systemd-logind[760]: Session 70 logged out. Waiting for processes to exit. Dec 2 05:00:06 localhost ceph-mgr[287188]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr load Constructed class from module: iostat Dec 2 05:00:06 localhost ceph-mgr[287188]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr load Constructed class from module: nfs Dec 2 05:00:06 localhost ceph-mgr[287188]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 05:00:06 localhost systemd-logind[760]: Removed session 70. Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr load Constructed class from module: orchestrator Dec 2 05:00:06 localhost ceph-mgr[287188]: [devicehealth INFO root] Starting Dec 2 05:00:06 localhost ceph-mgr[287188]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr load Constructed class from module: pg_autoscaler Dec 2 05:00:06 localhost ceph-mgr[287188]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr load Constructed class from module: progress Dec 2 05:00:06 localhost ceph-mgr[287188]: [progress INFO root] Loading... Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 05:00:06 localhost ceph-mgr[287188]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Dec 2 05:00:06 localhost ceph-mgr[287188]: [progress INFO root] Loaded OSDMap, ready. 
Dec 2 05:00:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:00:06 localhost ceph-mon[301710]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: Activating manager daemon np0005541914.lljzmk Dec 2 05:00:06 localhost ceph-mon[301710]: from='client.? 172.18.0.200:0/1313402171' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 2 05:00:06 localhost ceph-mon[301710]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 2 05:00:06 localhost ceph-mon[301710]: Manager daemon np0005541914.lljzmk is now available Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] recovery thread starting Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] starting setup Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr load Constructed class from module: rbd_support Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch Dec 2 05:00:06 localhost ceph-mgr[287188]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr load Constructed class from module: restful Dec 2 05:00:06 localhost ceph-mgr[287188]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr load Constructed class from module: status Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] 
load_schedules: vms, start_after= Dec 2 05:00:06 localhost ceph-mgr[287188]: [restful INFO root] server_addr: :: server_port: 8003 Dec 2 05:00:06 localhost ceph-mgr[287188]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 05:00:06 localhost ceph-mgr[287188]: [restful WARNING root] server not running: no certificate configured Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr load Constructed class from module: telemetry Dec 2 05:00:06 localhost ceph-mgr[287188]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] PerfHandler: starting Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_task_task: vms, start_after= Dec 2 05:00:06 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:00:06 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:00:06 localhost ceph-mgr[287188]: mgr load Constructed class from module: volumes Dec 2 05:00:06 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-mgr[287188]: client.0 error registering admin 
socket command: (17) File exists Dec 2 05:00:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.949+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.949+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.949+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.949+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.949+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.951+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.951+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.951+0000 7fd37f572640 -1 client.0 error 
registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.951+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.951+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_task_task: volumes, start_after= Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_task_task: images, start_after= Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_task_task: backups, start_after= Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] TaskHandler: starting Dec 2 05:00:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} v 0) Dec 2 05:00:06 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:00:06 
localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Dec 2 05:00:06 localhost ceph-mgr[287188]: [rbd_support INFO root] setup complete Dec 2 05:00:07 localhost sshd[304476]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:00:07 localhost systemd-logind[760]: New session 71 of user ceph-admin. Dec 2 05:00:07 localhost systemd[1]: Started Session 71 of User ceph-admin. Dec 2 05:00:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:07 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch Dec 2 05:00:07 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch Dec 2 05:00:07 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch Dec 2 05:00:07 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch Dec 2 05:00:08 localhost podman[304583]: 2025-12-02 10:00:08.152357053 +0000 UTC m=+0.077324856 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z) Dec 2 05:00:08 localhost podman[304583]: 2025-12-02 10:00:08.267974048 +0000 UTC m=+0.192941831 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, release=1763362218, RELEASE=main, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., name=rhceph, 
distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, version=7, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_CLEAN=True) Dec 2 05:00:08 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 05:00:08 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:08 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 05:00:08 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 05:00:08 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 05:00:08 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:08 localhost ceph-mon[301710]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Dec 2 05:00:08 localhost ceph-mon[301710]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Dec 2 05:00:08 localhost ceph-mon[301710]: Cluster is now healthy Dec 2 05:00:08 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:08 localhost 
ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:08 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:08 localhost ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:10:00:08] ENGINE Bus STARTING Dec 2 05:00:08 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:10:00:08] ENGINE Bus STARTING Dec 2 05:00:08 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 05:00:09 localhost ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:10:00:09] ENGINE Serving on http://172.18.0.108:8765 Dec 2 05:00:09 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:10:00:09] ENGINE Serving on http://172.18.0.108:8765 Dec 2 05:00:09 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 05:00:09 localhost ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:10:00:09] ENGINE Serving on https://172.18.0.108:7150 Dec 2 05:00:09 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:10:00:09] ENGINE Serving on https://172.18.0.108:7150 Dec 2 05:00:09 localhost ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:10:00:09] ENGINE Bus STARTED Dec 2 05:00:09 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:10:00:09] ENGINE Bus STARTED Dec 2 05:00:09 localhost ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:10:00:09] ENGINE Client ('172.18.0.108', 52382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 2 05:00:09 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:10:00:09] ENGINE Client ('172.18.0.108', 52382) lost — peer dropped the TLS connection 
suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 2 05:00:09 localhost ceph-mgr[287188]: [devicehealth INFO root] Check health Dec 2 05:00:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:00:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:00:09 localhost systemd[1]: tmp-crun.jpemxS.mount: Deactivated successfully. Dec 2 05:00:09 localhost podman[304822]: 2025-12-02 10:00:09.922765061 +0000 UTC m=+0.090992471 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc.) Dec 2 05:00:09 localhost podman[304822]: 2025-12-02 10:00:09.933407783 +0000 UTC m=+0.101635133 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 2 05:00:09 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 05:00:09 localhost ceph-mon[301710]: [02/Dec/2025:10:00:08] ENGINE Bus STARTING Dec 2 05:00:09 localhost ceph-mon[301710]: [02/Dec/2025:10:00:09] ENGINE Serving on http://172.18.0.108:8765 Dec 2 05:00:09 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:09 localhost ceph-mon[301710]: [02/Dec/2025:10:00:09] ENGINE Serving on https://172.18.0.108:7150 Dec 2 05:00:09 localhost ceph-mon[301710]: [02/Dec/2025:10:00:09] ENGINE Bus STARTED Dec 2 05:00:09 localhost ceph-mon[301710]: [02/Dec/2025:10:00:09] ENGINE Client ('172.18.0.108', 52382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 2 05:00:09 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:10 localhost podman[304821]: 2025-12-02 10:00:10.017643177 +0000 UTC m=+0.183762772 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:00:10 localhost podman[304821]: 2025-12-02 10:00:10.067975723 +0000 UTC m=+0.234095268 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': 
'/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:00:10 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:00:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 05:00:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 05:00:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Dec 2 05:00:10 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 05:00:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Dec 2 05:00:10 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 05:00:10 localhost ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541913.localdomain to 836.6M Dec 2 05:00:10 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541913.localdomain to 836.6M Dec 2 05:00:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config set, 
name=osd_memory_target}] v 0) Dec 2 05:00:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 05:00:10 localhost ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:00:10 localhost ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:00:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 05:00:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Dec 2 05:00:10 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 05:00:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Dec 2 05:00:10 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 05:00:10 localhost ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541914.localdomain to 836.6M Dec 2 05:00:10 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541914.localdomain to 836.6M Dec 2 05:00:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command 
mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 2 05:00:10 localhost ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:00:10 localhost ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.558630) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610558717, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 490, "num_deletes": 252, "total_data_size": 1628548, "memory_usage": 1641112, "flush_reason": "Manual Compaction"} Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Dec 2 05:00:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610571619, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 1063710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14053, 
"largest_seqno": 14538, "table_properties": {"data_size": 1060733, "index_size": 960, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 8165, "raw_average_key_size": 21, "raw_value_size": 1054438, "raw_average_value_size": 2819, "num_data_blocks": 38, "num_entries": 374, "num_filter_entries": 374, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669605, "oldest_key_time": 1764669605, "file_creation_time": 1764669610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 13074 microseconds, and 5062 cpu microseconds. Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.571707) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 1063710 bytes OK Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.571739) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.573385) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.573409) EVENT_LOG_v1 {"time_micros": 1764669610573403, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.573433) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1625430, prev total WAL file size 1625754, number of live WAL files 2. Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.574244) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373536' seq:72057594037927935, type:22 .. 
'6D6772737461740034303038' seq:0, type:0; will stop at (end) Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(1038KB)], [15(17MB)] Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610574310, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 19596292, "oldest_snapshot_seqno": -1} Dec 2 05:00:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 11924 keys, 17247608 bytes, temperature: kUnknown Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610697302, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 17247608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17181691, "index_size": 35032, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29829, "raw_key_size": 320068, "raw_average_key_size": 26, "raw_value_size": 16980335, "raw_average_value_size": 1424, "num_data_blocks": 1326, "num_entries": 11924, "num_filter_entries": 11924, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", 
"merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764669610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.697733) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 17247608 bytes Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.699284) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.2 rd, 140.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 17.7 +0.0 blob) out(16.4 +0.0 blob), read-write-amplify(34.6) write-amplify(16.2) OK, records in: 12456, records dropped: 532 output_compression: NoCompression Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.699317) EVENT_LOG_v1 {"time_micros": 1764669610699302, "job": 6, "event": "compaction_finished", "compaction_time_micros": 123089, "compaction_time_cpu_micros": 47580, "output_level": 6, "num_output_files": 1, "total_output_size": 17247608, "num_input_records": 12456, "num_output_records": 11924, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, 
"num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610699666, "job": 6, "event": "table_file_deletion", "file_number": 17} Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610702239, "job": 6, "event": "table_file_deletion", "file_number": 15} Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.574141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.702304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.702312) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.702316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.702318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:10 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.702322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:10 localhost ceph-mon[301710]: 
mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Dec 2 05:00:10 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 05:00:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Dec 2 05:00:10 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 05:00:10 localhost ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541912.localdomain to 836.6M Dec 2 05:00:10 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541912.localdomain to 836.6M Dec 2 05:00:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 2 05:00:10 localhost ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:00:10 localhost ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:00:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:00:10 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:00:10 
localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:00:10 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:00:10 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 05:00:10 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 05:00:10 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 05:00:10 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 05:00:10 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 05:00:10 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 05:00:10 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": 
"osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[301710]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M Dec 2 05:00:11 localhost ceph-mon[301710]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[301710]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M Dec 2 05:00:11 localhost ceph-mon[301710]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:11 localhost ceph-mon[301710]: 
from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:00:11 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 05:00:11 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 05:00:11 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 05:00:11 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 05:00:11 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 05:00:11 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 05:00:11 localhost 
ceph-mgr[287188]: mgr.server handle_open ignoring open from mgr.np0005541912.qwddia 172.18.0.106:0/2644696530; not ready for session (expect reconnect) Dec 2 05:00:11 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 05:00:11 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 05:00:12 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 05:00:12 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 05:00:12 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 05:00:12 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 05:00:12 localhost openstack_network_exporter[241816]: ERROR 10:00:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:00:12 localhost openstack_network_exporter[241816]: ERROR 10:00:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:00:12 localhost openstack_network_exporter[241816]: ERROR 10:00:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:00:12 localhost openstack_network_exporter[241816]: ERROR 10:00:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:00:12 localhost openstack_network_exporter[241816]: Dec 2 05:00:12 localhost openstack_network_exporter[241816]: ERROR 10:00:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:00:12 localhost openstack_network_exporter[241816]: Dec 2 05:00:12 
localhost ceph-mon[301710]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M Dec 2 05:00:12 localhost ceph-mon[301710]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:00:12 localhost ceph-mon[301710]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 05:00:12 localhost ceph-mon[301710]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 05:00:12 localhost ceph-mon[301710]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 05:00:12 localhost ceph-mon[301710]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 05:00:12 localhost ceph-mon[301710]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 05:00:12 localhost ceph-mon[301710]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 05:00:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:12 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 05:00:12 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 05:00:12 localhost ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 05:00:12 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 05:00:12 localhost ceph-mgr[287188]: 
[cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 05:00:12 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 05:00:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} v 0) Dec 2 05:00:12 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} : dispatch Dec 2 05:00:12 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:13 localhost ceph-mon[301710]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 05:00:13 localhost ceph-mon[301710]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 05:00:13 localhost ceph-mon[301710]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 05:00:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 05:00:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 05:00:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 05:00:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 05:00:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 05:00:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 05:00:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:00:13 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev 77321482-11e0-40b2-9e17-bc7146a7dd6f (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:00:13 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 77321482-11e0-40b2-9e17-bc7146a7dd6f (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:00:13 localhost ceph-mgr[287188]: [progress INFO root] Completed event 77321482-11e0-40b2-9e17-bc7146a7dd6f (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:00:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:00:13 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:00:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:00:13 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:00:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 
05:00:13 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:00:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:00:13 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev 206d8b3f-30a4-4d05-ae2b-66d496706789 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:00:13 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 206d8b3f-30a4-4d05-ae2b-66d496706789 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:00:13 localhost ceph-mgr[287188]: [progress INFO root] Completed event 206d8b3f-30a4-4d05-ae2b-66d496706789 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:00:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:00:13 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:00:13 localhost sshd[305576]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:00:14 localhost ceph-mon[301710]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 05:00:14 localhost ceph-mon[301710]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 05:00:14 localhost ceph-mon[301710]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 05:00:14 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:14 localhost ceph-mon[301710]: from='mgr.34354 ' 
entity='mgr.np0005541914.lljzmk' Dec 2 05:00:14 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:14 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:14 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:14 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:14 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:14 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:00:14 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:14 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s Dec 2 05:00:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 05:00:15 localhost podman[305578]: 2025-12-02 10:00:15.070090375 +0000 UTC m=+0.073887470 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:00:15 localhost podman[305578]: 2025-12-02 10:00:15.112100619 +0000 UTC m=+0.115897704 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 05:00:15 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 
2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:16 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s Dec 2 05:00:16 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:00:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:00:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:17 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:18 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Dec 2 05:00:20 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 
B/s wr, 9 op/s Dec 2 05:00:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:22 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Dec 2 05:00:24 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Dec 2 05:00:26 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:28 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:30 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:00:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:00:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:00:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:00:31 localhost systemd[1]: tmp-crun.jTmPGt.mount: Deactivated successfully. 
Dec 2 05:00:31 localhost podman[305597]: 2025-12-02 10:00:31.090107507 +0000 UTC m=+0.090331551 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:00:31 localhost podman[305597]: 2025-12-02 10:00:31.123820759 +0000 UTC 
m=+0.124044813 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0) Dec 2 05:00:31 localhost systemd[1]: tmp-crun.Cf7Rv0.mount: Deactivated successfully. 
Dec 2 05:00:31 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 05:00:31 localhost podman[305598]: 2025-12-02 10:00:31.135422151 +0000 UTC m=+0.133549411 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:00:31 localhost podman[305598]: 2025-12-02 10:00:31.145895867 +0000 UTC m=+0.144023127 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': 
'/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:00:31 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:00:31 localhost podman[305599]: 2025-12-02 10:00:31.186579741 +0000 UTC m=+0.179580886 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true) Dec 2 05:00:31 localhost podman[305599]: 2025-12-02 10:00:31.200825283 +0000 UTC m=+0.193826448 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 05:00:31 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:00:31 localhost podman[305600]: 2025-12-02 10:00:31.238145924 +0000 UTC m=+0.227590581 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:00:31 localhost podman[305600]: 2025-12-02 10:00:31.303799966 +0000 UTC m=+0.293244613 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 05:00:31 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:00:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:32 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:33 localhost podman[239757]: time="2025-12-02T10:00:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:00:33 localhost podman[239757]: @ - - [02/Dec/2025:10:00:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:00:33 localhost podman[239757]: @ - - [02/Dec/2025:10:00:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19191 "" "Go-http-client/1.1" Dec 2 05:00:34 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:36 localhost sshd[305678]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:00:36 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:00:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:00:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:00:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:00:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:00:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:00:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:38 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:00:40 localhost podman[305680]: 2025-12-02 10:00:40.057380271 +0000 UTC m=+0.060647820 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9) Dec 2 05:00:40 localhost podman[305680]: 2025-12-02 10:00:40.070859289 +0000 UTC m=+0.074126898 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, config_id=edpm, container_name=openstack_network_exporter, architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9) Dec 2 05:00:40 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 05:00:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:00:40 localhost systemd[1]: tmp-crun.uhg10h.mount: Deactivated successfully. 
Dec 2 05:00:40 localhost podman[305699]: 2025-12-02 10:00:40.173614995 +0000 UTC m=+0.068007863 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:00:40 localhost podman[305699]: 2025-12-02 10:00:40.18399873 +0000 UTC m=+0.078391628 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:00:40 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:00:40 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:42 localhost openstack_network_exporter[241816]: ERROR 10:00:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:00:42 localhost openstack_network_exporter[241816]: ERROR 10:00:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:00:42 localhost openstack_network_exporter[241816]: ERROR 10:00:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:00:42 localhost openstack_network_exporter[241816]: ERROR 10:00:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:00:42 localhost openstack_network_exporter[241816]: Dec 2 05:00:42 localhost openstack_network_exporter[241816]: ERROR 10:00:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:00:42 localhost openstack_network_exporter[241816]: Dec 2 05:00:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:42 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:44 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 05:00:46 localhost podman[305722]: 2025-12-02 10:00:46.052269113 +0000 UTC m=+0.057316159 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:00:46 localhost podman[305722]: 2025-12-02 10:00:46.059401629 +0000 UTC m=+0.064448685 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible) Dec 2 05:00:46 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:00:46 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:48 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:50 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:52 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:54 localhost nova_compute[281045]: 2025-12-02 10:00:54.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:00:54 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:56 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:00:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:57 localhost nova_compute[281045]: 2025-12-02 10:00:57.523 281049 DEBUG oslo_service.periodic_task [None 
req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:00:57 localhost nova_compute[281045]: 2025-12-02 10:00:57.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:00:57 localhost nova_compute[281045]: 2025-12-02 10:00:57.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:00:57 localhost nova_compute[281045]: 2025-12-02 10:00:57.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:00:58 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0. 
Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.656735) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19 Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660656799, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 952, "num_deletes": 256, "total_data_size": 1793915, "memory_usage": 1817584, "flush_reason": "Manual Compaction"} Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660667778, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 1174858, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14543, "largest_seqno": 15490, "table_properties": {"data_size": 1170557, "index_size": 1964, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10294, "raw_average_key_size": 20, "raw_value_size": 1161498, "raw_average_value_size": 2290, "num_data_blocks": 82, "num_entries": 507, "num_filter_entries": 507, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669610, "oldest_key_time": 1764669610, "file_creation_time": 1764669660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}} Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 11086 microseconds, and 3620 cpu microseconds. Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.667825) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 1174858 bytes OK Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.667849) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.669536) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.669550) EVENT_LOG_v1 {"time_micros": 1764669660669546, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.669568) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1788903, prev total WAL file size 1789227, number of live 
WAL files 2. Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.670083) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373637' seq:72057594037927935, type:22 .. '6C6F676D0034303139' seq:0, type:0; will stop at (end) Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(1147KB)], [18(16MB)] Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660670144, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18422466, "oldest_snapshot_seqno": -1} Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 11894 keys, 18282658 bytes, temperature: kUnknown Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660788920, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 18282658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18215448, "index_size": 36389, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29765, "raw_key_size": 320669, "raw_average_key_size": 26, "raw_value_size": 18013012, "raw_average_value_size": 1514, 
"num_data_blocks": 1380, "num_entries": 11894, "num_filter_entries": 11894, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764669660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}} Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.789268) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 18282658 bytes Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.791120) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.0 rd, 153.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 16.4 +0.0 blob) out(17.4 +0.0 blob), read-write-amplify(31.2) write-amplify(15.6) OK, records in: 12431, records dropped: 537 output_compression: NoCompression Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.791166) EVENT_LOG_v1 {"time_micros": 1764669660791148, "job": 8, "event": "compaction_finished", "compaction_time_micros": 118863, "compaction_time_cpu_micros": 33415, "output_level": 6, "num_output_files": 1, "total_output_size": 18282658, "num_input_records": 12431, "num_output_records": 11894, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660791471, "job": 8, "event": "table_file_deletion", "file_number": 20} Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660793181, "job": 8, 
"event": "table_file_deletion", "file_number": 18} Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.669987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.793257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.793263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.793266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.793268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:01:00 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.793270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:01:00 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:01:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:01:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:01:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:01:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. 
Dec 2 05:01:02 localhost podman[305755]: 2025-12-02 10:01:02.086500335 +0000 UTC m=+0.070990733 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:01:02 localhost podman[305755]: 2025-12-02 10:01:02.092129506 +0000 UTC m=+0.076619924 container exec_died 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:01:02 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:01:02 localhost podman[305754]: 2025-12-02 10:01:02.133250803 +0000 UTC m=+0.122936089 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:01:02 localhost podman[305754]: 2025-12-02 10:01:02.141602346 +0000 UTC m=+0.131287632 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:01:02 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:01:02 localhost podman[305765]: 2025-12-02 10:01:02.142345218 +0000 UTC m=+0.122076691 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:01:02 localhost podman[305753]: 2025-12-02 10:01:02.216164906 +0000 UTC m=+0.205984226 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 05:01:02 localhost podman[305765]: 2025-12-02 10:01:02.239929447 +0000 UTC m=+0.219660950 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, 
name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:01:02 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:01:02 localhost podman[305753]: 2025-12-02 10:01:02.294886964 +0000 UTC m=+0.284706264 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:01:02 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: 
Deactivated successfully. Dec 2 05:01:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:02 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:01:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:01:03.172 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:01:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:01:03.173 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:01:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:01:03.173 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:01:03 localhost podman[239757]: time="2025-12-02T10:01:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:01:03 localhost podman[239757]: @ - - [02/Dec/2025:10:01:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:01:03 localhost podman[239757]: @ - - [02/Dec/2025:10:01:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19190 "" "Go-http-client/1.1" Dec 2 05:01:04 localhost nova_compute[281045]: 2025-12-02 
10:01:04.286 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 05:01:04 localhost nova_compute[281045]: 2025-12-02 10:01:04.287 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:04 localhost nova_compute[281045]: 2025-12-02 10:01:04.288 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:04 localhost nova_compute[281045]: 2025-12-02 10:01:04.288 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:04 localhost nova_compute[281045]: 2025-12-02 10:01:04.289 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:04 localhost nova_compute[281045]: 2025-12-02 10:01:04.289 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:04 localhost nova_compute[281045]: 2025-12-02 10:01:04.289 281049 DEBUG nova.compute.manager [None 
req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:01:04 localhost nova_compute[281045]: 2025-12-02 10:01:04.289 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:01:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/457788740' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:01:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:01:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/457788740' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:01:04 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:01:05 localhost nova_compute[281045]: 2025-12-02 10:01:05.084 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:01:05 localhost nova_compute[281045]: 2025-12-02 10:01:05.084 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:01:05 localhost nova_compute[281045]: 2025-12-02 10:01:05.085 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:01:05 localhost nova_compute[281045]: 2025-12-02 10:01:05.085 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:01:05 localhost nova_compute[281045]: 2025-12-02 10:01:05.086 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd 
(subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:01:05 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:01:05 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/198510701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:01:05 localhost nova_compute[281045]: 2025-12-02 10:01:05.496 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:01:05 localhost nova_compute[281045]: 2025-12-02 10:01:05.716 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:01:05 localhost nova_compute[281045]: 2025-12-02 10:01:05.718 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11993MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:01:05 localhost nova_compute[281045]: 2025-12-02 10:01:05.719 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:01:05 localhost nova_compute[281045]: 2025-12-02 10:01:05.719 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:01:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:01:06 Dec 2 05:01:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 05:01:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap Dec 2 05:01:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['vms', '.mgr', 'backups', 'manila_metadata', 'images', 'volumes', 'manila_data'] Dec 2 05:01:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes Dec 2 05:01:06 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:01:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:01:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 
45071990784 Dec 2 05:01:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 2 05:01:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:01:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Dec 2 05:01:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:01:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:01:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:01:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32) Dec 2 05:01:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:01:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:01:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:01:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:01:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:01:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019596681323283084 quantized to 16 
(current 16) Dec 2 05:01:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:01:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:01:06 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:01:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:01:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:01:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:01:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:01:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:01:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:01:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:01:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:01:06 localhost nova_compute[281045]: 2025-12-02 10:01:06.947 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:01:06 localhost nova_compute[281045]: 2025-12-02 10:01:06.948 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:01:06 localhost nova_compute[281045]: 2025-12-02 10:01:06.966 281049 DEBUG oslo_concurrency.processutils 
[None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:01:06 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:01:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:01:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:01:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:01:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:01:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:01:07 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1910608220' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:01:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:07 localhost nova_compute[281045]: 2025-12-02 10:01:07.398 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:01:07 localhost nova_compute[281045]: 2025-12-02 10:01:07.404 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:01:07 
localhost nova_compute[281045]: 2025-12-02 10:01:07.420 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:01:07 localhost nova_compute[281045]: 2025-12-02 10:01:07.424 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:01:07 localhost nova_compute[281045]: 2025-12-02 10:01:07.425 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:01:08 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:01:10 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:01:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. 
Dec 2 05:01:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:01:11 localhost systemd[1]: tmp-crun.79i7yl.mount: Deactivated successfully. Dec 2 05:01:11 localhost podman[305882]: 2025-12-02 10:01:11.087202444 +0000 UTC m=+0.092651600 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:01:11 localhost podman[305883]: 2025-12-02 10:01:11.133398585 +0000 UTC m=+0.136336775 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter) Dec 2 05:01:11 localhost podman[305883]: 2025-12-02 10:01:11.145915034 +0000 UTC m=+0.148853254 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-minimal-container) Dec 2 05:01:11 localhost podman[305882]: 2025-12-02 10:01:11.15369834 +0000 UTC m=+0.159147466 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:01:11 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 05:01:11 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:01:12 localhost openstack_network_exporter[241816]: ERROR 10:01:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:01:12 localhost openstack_network_exporter[241816]: ERROR 10:01:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:01:12 localhost openstack_network_exporter[241816]: ERROR 10:01:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:01:12 localhost openstack_network_exporter[241816]: ERROR 10:01:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:01:12 localhost openstack_network_exporter[241816]: Dec 2 05:01:12 localhost openstack_network_exporter[241816]: ERROR 10:01:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:01:12 localhost openstack_network_exporter[241816]: Dec 2 05:01:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:12 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:01:14 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 05:01:14 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 05:01:14 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 05:01:14 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 05:01:14 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 05:01:14 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 05:01:14 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:01:15 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:01:15 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:01:15 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:01:15 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:01:15 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:01:15 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev 0f3ac13e-e4c3-4c2d-90b3-d07fc85ca851 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:01:15 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 0f3ac13e-e4c3-4c2d-90b3-d07fc85ca851 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:01:15 localhost ceph-mgr[287188]: [progress INFO root] Completed event 0f3ac13e-e4c3-4c2d-90b3-d07fc85ca851 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:01:15 localhost 
ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:01:15 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:01:15 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:01:15 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:01:15 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:01:15 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:01:15 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:01:15 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:01:15 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:01:16 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:01:16 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:01:16 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:01:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:01:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 05:01:17 localhost podman[306072]: 2025-12-02 10:01:17.094703519 +0000 UTC m=+0.098739566 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:01:17 localhost podman[306072]: 2025-12-02 10:01:17.10598224 +0000 UTC m=+0.110018177 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:01:17 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:01:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:01:17 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:01:18 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:01:20 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:01:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:01:22 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:01:24 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:01:26 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:01:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:01:27.143 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:01:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:01:27.144 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 2 05:01:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:01:28 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:01:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:01:29.147 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 2 05:01:30 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:01:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:01:32 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:01:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 05:01:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 05:01:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 05:01:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 05:01:33 localhost podman[306091]: 2025-12-02 10:01:33.085286276 +0000 UTC m=+0.084508973 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 2 05:01:33 localhost podman[306099]: 2025-12-02 10:01:33.14351552 +0000 UTC m=+0.133610991 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 2 05:01:33 localhost podman[306091]: 2025-12-02 10:01:33.164990042 +0000 UTC m=+0.164212709 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:01:33 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 05:01:33 localhost podman[306092]: 2025-12-02 10:01:33.247767232 +0000 UTC m=+0.242360070 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 05:01:33 localhost podman[306092]: 2025-12-02 10:01:33.255924339 +0000 UTC m=+0.250517137 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 05:01:33 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 2 05:01:33 localhost podman[306093]: 2025-12-02 10:01:33.31532658 +0000 UTC m=+0.306552896 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 2 05:01:33 localhost podman[306099]: 2025-12-02 10:01:33.323187969 +0000 UTC m=+0.313283490 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 2 05:01:33 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 05:01:33 localhost podman[306093]: 2025-12-02 10:01:33.375333539 +0000 UTC m=+0.366559865 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 2 05:01:33 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 2 05:01:33 localhost podman[239757]: time="2025-12-02T10:01:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 05:01:33 localhost podman[239757]: @ - - [02/Dec/2025:10:01:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 2 05:01:33 localhost podman[239757]: @ - - [02/Dec/2025:10:01:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19188 "" "Go-http-client/1.1"
Dec 2 05:01:34 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:01:36 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:01:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:01:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:01:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:01:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )]
Dec 2 05:01:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 2 05:01:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:01:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )]
Dec 2 05:01:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 2 05:01:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:01:38 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 2 05:01:40 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v50: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 2 05:01:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 05:01:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 05:01:42 localhost podman[306175]: 2025-12-02 10:01:42.067113715 +0000 UTC m=+0.065020353 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 2 05:01:42 localhost podman[306175]: 2025-12-02 10:01:42.082979426 +0000 UTC m=+0.080886134 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41)
Dec 2 05:01:42 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 05:01:42 localhost openstack_network_exporter[241816]: ERROR 10:01:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 05:01:42 localhost openstack_network_exporter[241816]: ERROR 10:01:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:01:42 localhost openstack_network_exporter[241816]: ERROR 10:01:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:01:42 localhost openstack_network_exporter[241816]: ERROR 10:01:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 05:01:42 localhost openstack_network_exporter[241816]:
Dec 2 05:01:42 localhost openstack_network_exporter[241816]: ERROR 10:01:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 05:01:42 localhost openstack_network_exporter[241816]:
Dec 2 05:01:42 localhost podman[306174]: 2025-12-02 10:01:42.179557084 +0000 UTC m=+0.178197945 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 05:01:42 localhost podman[306174]: 2025-12-02 10:01:42.190625839 +0000 UTC m=+0.189266720 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 2 05:01:42 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 05:01:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:01:42 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v51: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 2 05:01:44 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v52: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 2 05:01:46 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v53: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 2 05:01:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:01:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 05:01:48 localhost podman[306217]: 2025-12-02 10:01:48.091978316 +0000 UTC m=+0.097501927 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 2 05:01:48 localhost podman[306217]: 2025-12-02 10:01:48.103498316 +0000 UTC m=+0.109021967 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 2 05:01:48 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 05:01:48 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v54: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s Dec 2 05:01:50 localhost sshd[306235]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:01:50 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v55: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:01:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:52 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v56: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:01:54 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v57: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:01:55 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e92 e92: 6 total, 6 up, 6 in Dec 2 05:01:56 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v59: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail Dec 2 05:01:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 e93: 6 total, 6 up, 6 in Dec 2 05:01:58 localhost nova_compute[281045]: 2025-12-02 10:01:58.664 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:58 localhost nova_compute[281045]: 2025-12-02 10:01:58.665 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running 
periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:58 localhost nova_compute[281045]: 2025-12-02 10:01:58.666 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:58 localhost nova_compute[281045]: 2025-12-02 10:01:58.666 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:58 localhost nova_compute[281045]: 2025-12-02 10:01:58.666 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:58 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v61: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 41 op/s Dec 2 05:01:59 localhost nova_compute[281045]: 2025-12-02 10:01:59.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:59 localhost nova_compute[281045]: 2025-12-02 10:01:59.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:01:59 localhost nova_compute[281045]: 
2025-12-02 10:01:59.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:01:59 localhost nova_compute[281045]: 2025-12-02 10:01:59.685 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 05:01:59 localhost nova_compute[281045]: 2025-12-02 10:01:59.686 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:59 localhost nova_compute[281045]: 2025-12-02 10:01:59.687 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:59 localhost nova_compute[281045]: 2025-12-02 10:01:59.725 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:01:59 localhost nova_compute[281045]: 2025-12-02 10:01:59.726 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:01:59 localhost 
nova_compute[281045]: 2025-12-02 10:01:59.726 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:01:59 localhost nova_compute[281045]: 2025-12-02 10:01:59.726 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:01:59 localhost nova_compute[281045]: 2025-12-02 10:01:59.727 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:02:00 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:02:00 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1269445713' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:02:00 localhost nova_compute[281045]: 2025-12-02 10:02:00.189 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:02:00 localhost nova_compute[281045]: 2025-12-02 10:02:00.397 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:02:00 localhost nova_compute[281045]: 2025-12-02 10:02:00.398 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11985MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", 
"vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:02:00 localhost nova_compute[281045]: 2025-12-02 10:02:00.399 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:02:00 localhost nova_compute[281045]: 2025-12-02 10:02:00.399 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:02:00 localhost nova_compute[281045]: 2025-12-02 10:02:00.587 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total 
usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:02:00 localhost nova_compute[281045]: 2025-12-02 10:02:00.587 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:02:00 localhost nova_compute[281045]: 2025-12-02 10:02:00.606 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:02:00 localhost ovn_controller[153778]: 2025-12-02T10:02:00Z|00038|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory Dec 2 05:02:00 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v62: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 41 op/s Dec 2 05:02:01 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:02:01 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2528476716' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:02:01 localhost nova_compute[281045]: 2025-12-02 10:02:01.017 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:02:01 localhost nova_compute[281045]: 2025-12-02 10:02:01.024 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:02:01 localhost nova_compute[281045]: 2025-12-02 10:02:01.133 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:02:01 localhost nova_compute[281045]: 2025-12-02 10:02:01.136 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:02:01 localhost nova_compute[281045]: 2025-12-02 10:02:01.137 281049 DEBUG 
oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:02:01 localhost nova_compute[281045]: 2025-12-02 10:02:01.978 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:01 localhost nova_compute[281045]: 2025-12-02 10:02:01.993 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:01 localhost nova_compute[281045]: 2025-12-02 10:02:01.993 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:02:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:02 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v63: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s Dec 2 05:02:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:02:03.173 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:02:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:02:03.174 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:02:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:02:03.174 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:02:03 localhost podman[239757]: time="2025-12-02T10:02:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:02:03 localhost podman[239757]: @ - - [02/Dec/2025:10:02:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:02:03 localhost podman[239757]: @ - - [02/Dec/2025:10:02:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false 
HTTP/1.1" 200 19184 "" "Go-http-client/1.1" Dec 2 05:02:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:02:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:02:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:02:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:02:04 localhost podman[306281]: 2025-12-02 10:02:04.088087053 +0000 UTC m=+0.087371911 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:02:04 localhost podman[306281]: 2025-12-02 10:02:04.12197104 +0000 UTC m=+0.121255838 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, 
name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:02:04 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:02:04 localhost podman[306280]: 2025-12-02 10:02:04.197956594 +0000 UTC m=+0.197930142 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 05:02:04 localhost podman[306282]: 2025-12-02 10:02:04.256249312 +0000 UTC m=+0.247664961 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:02:04 localhost podman[306282]: 2025-12-02 10:02:04.266058399 +0000 UTC m=+0.257474038 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:02:04 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:02:04 localhost podman[306280]: 2025-12-02 10:02:04.283178998 +0000 UTC m=+0.283152546 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 05:02:04 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 05:02:04 localhost podman[306283]: 2025-12-02 10:02:04.303282677 +0000 UTC m=+0.294695376 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3) Dec 2 05:02:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:02:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4056391954' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:02:04 localhost podman[306283]: 2025-12-02 10:02:04.370764593 +0000 UTC m=+0.362177312 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 05:02:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:02:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4056391954' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:02:04 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 05:02:04 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v64: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 4.5 MiB/s wr, 42 op/s Dec 2 05:02:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:02:06 Dec 2 05:02:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 05:02:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap Dec 2 05:02:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['manila_data', 'vms', 'backups', 'images', '.mgr', 'volumes', 'manila_metadata'] Dec 2 05:02:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes Dec 2 05:02:06 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v65: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s Dec 2 05:02:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:02:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:02:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 2 05:02:06 localhost 
ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:02:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Dec 2 05:02:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:02:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:02:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:02:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 2 05:02:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:02:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:02:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:02:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:02:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:02:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Dec 2 05:02:06 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:02:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:02:06 
localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:02:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:02:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:02:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:02:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:02:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:02:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:02:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:02:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:02:06 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:02:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:02:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:02:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:02:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:02:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:08 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v66: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 3.0 KiB/s rd, 465 B/s wr, 4 op/s Dec 2 05:02:10 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v67: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s Dec 2 05:02:12 
localhost openstack_network_exporter[241816]: ERROR 10:02:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:02:12 localhost openstack_network_exporter[241816]: ERROR 10:02:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:02:12 localhost openstack_network_exporter[241816]: ERROR 10:02:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:02:12 localhost openstack_network_exporter[241816]: ERROR 10:02:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:02:12 localhost openstack_network_exporter[241816]: Dec 2 05:02:12 localhost openstack_network_exporter[241816]: ERROR 10:02:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:02:12 localhost openstack_network_exporter[241816]: Dec 2 05:02:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:12 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v68: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s Dec 2 05:02:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:02:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 05:02:13 localhost podman[306365]: 2025-12-02 10:02:13.076235619 +0000 UTC m=+0.078994976 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:02:13 localhost podman[306365]: 2025-12-02 10:02:13.08384616 +0000 UTC m=+0.086605517 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:02:13 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:02:13 localhost podman[306366]: 2025-12-02 10:02:13.12077559 +0000 UTC m=+0.122171946 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., 
io.openshift.expose-services=, name=ubi9-minimal) Dec 2 05:02:13 localhost podman[306366]: 2025-12-02 10:02:13.157712499 +0000 UTC m=+0.159108885 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, version=9.6, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, release=1755695350, container_name=openstack_network_exporter) Dec 2 05:02:13 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:02:14 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v69: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 
Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 
2025-12-02 10:02:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:02:16 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:02:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:02:16 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:02:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:02:16 localhost ceph-mgr[287188]: [progress INFO root] 
update: starting ev 1cc76e6b-84f6-41ff-8ef9-5c2a14957181 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:02:16 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 1cc76e6b-84f6-41ff-8ef9-5c2a14957181 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:02:16 localhost ceph-mgr[287188]: [progress INFO root] Completed event 1cc76e6b-84f6-41ff-8ef9-5c2a14957181 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:02:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:02:16 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:02:16 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v70: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:16 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:02:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:02:16 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:02:16 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:02:16 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:02:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:18 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v71: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:18 
localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:02:19 localhost systemd[297067]: Created slice User Background Tasks Slice. Dec 2 05:02:19 localhost systemd[297067]: Starting Cleanup of User's Temporary Files and Directories... Dec 2 05:02:19 localhost podman[306490]: 2025-12-02 10:02:19.092076047 +0000 UTC m=+0.087506104 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 05:02:19 localhost systemd[297067]: Finished Cleanup of User's Temporary Files and Directories. Dec 2 05:02:19 localhost podman[306490]: 2025-12-02 10:02:19.106872426 +0000 UTC m=+0.102302473 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd) Dec 2 05:02:19 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 05:02:20 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v72: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:22 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v73: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:24 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v74: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:26 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v75: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:27 localhost sshd[306511]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:02:28 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v76: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:02:29.038 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 
'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:02:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:02:29.040 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:02:30 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v77: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:32 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v78: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:33 localhost podman[239757]: time="2025-12-02T10:02:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:02:33 localhost podman[239757]: @ - - [02/Dec/2025:10:02:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:02:33 localhost podman[239757]: @ - - [02/Dec/2025:10:02:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19188 "" "Go-http-client/1.1" Dec 2 05:02:34 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v79: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. 
Dec 2 05:02:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:02:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:02:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:02:35 localhost podman[306513]: 2025-12-02 10:02:35.093847205 +0000 UTC m=+0.087953518 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 05:02:35 localhost podman[306513]: 2025-12-02 10:02:35.102732925 +0000 UTC m=+0.096839198 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 05:02:35 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:02:35 localhost podman[306515]: 2025-12-02 10:02:35.154876975 +0000 UTC m=+0.138140399 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller) Dec 2 05:02:35 localhost podman[306515]: 2025-12-02 10:02:35.196188458 +0000 UTC m=+0.179451842 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:02:35 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:02:35 localhost podman[306512]: 2025-12-02 10:02:35.244646907 +0000 UTC m=+0.239706229 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:02:35 localhost podman[306512]: 2025-12-02 10:02:35.254502946 +0000 UTC 
m=+0.249562238 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 05:02:35 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 05:02:35 localhost podman[306514]: 2025-12-02 10:02:35.204289883 +0000 UTC m=+0.192628051 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 05:02:35 localhost podman[306514]: 2025-12-02 10:02:35.338033669 +0000 UTC m=+0.326371837 container exec_died 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:02:35 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:02:36 localhost ovn_metadata_agent[159477]: 2025-12-02 10:02:36.042 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:02:36 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v80: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:02:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:02:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:02:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:02:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:02:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:02:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:38 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:02:38.771 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:02:38Z, description=, device_id=f7309812-362b-4bd1-84da-e909158b6cbe, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d5a7ee3e-3c8d-4f8c-ad01-26038c29d245, ip_allocation=immediate, mac_address=fa:16:3e:41:46:7b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=189, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:02:38Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:02:38 
localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v81: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:38 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:02:38 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:02:38 localhost podman[306612]: 2025-12-02 10:02:38.969236646 +0000 UTC m=+0.054583977 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:02:38 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:02:39 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:02:39.210 262347 INFO neutron.agent.dhcp.agent [None req-13bf1048-a70a-4e37-9d9f-5b7e1b84444c - - - - - -] DHCP configuration for ports {'d5a7ee3e-3c8d-4f8c-ad01-26038c29d245'} is completed#033[00m Dec 2 05:02:40 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v82: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:42 localhost openstack_network_exporter[241816]: ERROR 10:02:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:02:42 localhost openstack_network_exporter[241816]: ERROR 10:02:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:02:42 localhost 
openstack_network_exporter[241816]: ERROR 10:02:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:02:42 localhost openstack_network_exporter[241816]: ERROR 10:02:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:02:42 localhost openstack_network_exporter[241816]: Dec 2 05:02:42 localhost openstack_network_exporter[241816]: ERROR 10:02:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:02:42 localhost openstack_network_exporter[241816]: Dec 2 05:02:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:42 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v83: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:02:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:02:44 localhost systemd[1]: tmp-crun.IZswFC.mount: Deactivated successfully. Dec 2 05:02:44 localhost podman[306634]: 2025-12-02 10:02:44.038767583 +0000 UTC m=+0.048804261 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.buildah.version=1.33.7, 
vcs-type=git) Dec 2 05:02:44 localhost podman[306633]: 2025-12-02 10:02:44.23459514 +0000 UTC m=+0.246299738 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:02:44 localhost podman[306634]: 2025-12-02 10:02:44.243175781 +0000 UTC m=+0.253212449 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, distribution-scope=public, release=1755695350, vendor=Red Hat, 
Inc., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64) Dec 2 05:02:44 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 05:02:44 localhost podman[306633]: 2025-12-02 10:02:44.423609031 +0000 UTC m=+0.435313629 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:02:44 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:02:44 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v84: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:46 localhost nova_compute[281045]: 2025-12-02 10:02:46.724 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:02:46 localhost nova_compute[281045]: 2025-12-02 10:02:46.725 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:02:46 localhost nova_compute[281045]: 2025-12-02 10:02:46.745 281049 DEBUG nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Starting instance... 
_do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Dec 2 05:02:46 localhost nova_compute[281045]: 2025-12-02 10:02:46.869 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:02:46 localhost nova_compute[281045]: 2025-12-02 10:02:46.870 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:02:46 localhost nova_compute[281045]: 2025-12-02 10:02:46.876 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Dec 2 05:02:46 localhost nova_compute[281045]: 2025-12-02 10:02:46.876 281049 INFO nova.compute.claims [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Claim successful on node np0005541914.localdomain#033[00m Dec 2 05:02:46 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v85: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:47 localhost nova_compute[281045]: 2025-12-02 10:02:47.012 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:02:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:02:47 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3473161244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:02:47 localhost nova_compute[281045]: 2025-12-02 10:02:47.451 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:02:47 localhost nova_compute[281045]: 2025-12-02 10:02:47.459 281049 DEBUG nova.compute.provider_tree [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:02:47 localhost nova_compute[281045]: 2025-12-02 10:02:47.728 281049 DEBUG nova.scheduler.client.report [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. 
Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.123300) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768123390, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1552, "num_deletes": 251, "total_data_size": 2354823, "memory_usage": 2499192, "flush_reason": "Manual Compaction"} Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768137907, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 1534273, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15495, "largest_seqno": 17042, "table_properties": {"data_size": 1528251, "index_size": 3300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13615, "raw_average_key_size": 20, "raw_value_size": 1515825, "raw_average_value_size": 2321, "num_data_blocks": 142, "num_entries": 653, "num_filter_entries": 653, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669660, "oldest_key_time": 1764669660, "file_creation_time": 1764669768, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 14656 microseconds, and 5885 cpu microseconds. Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.137968) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 1534273 bytes OK Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.137997) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.139834) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.139851) EVENT_LOG_v1 {"time_micros": 1764669768139846, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.139869) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2347488, prev total WAL file size 2347488, 
number of live WAL files 2. Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.140737) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end) Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(1498KB)], [21(17MB)] Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768140860, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 19816931, "oldest_snapshot_seqno": -1} Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 12015 keys, 17167680 bytes, temperature: kUnknown Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768240801, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 17167680, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17100154, "index_size": 36385, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30085, "raw_key_size": 323674, "raw_average_key_size": 26, "raw_value_size": 16896089, 
"raw_average_value_size": 1406, "num_data_blocks": 1378, "num_entries": 12015, "num_filter_entries": 12015, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764669768, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.241250) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 17167680 bytes Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.243425) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.0 rd, 171.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 17.4 +0.0 blob) out(16.4 +0.0 blob), read-write-amplify(24.1) write-amplify(11.2) OK, records in: 12547, records dropped: 532 output_compression: NoCompression Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.243482) EVENT_LOG_v1 {"time_micros": 1764669768243444, "job": 10, "event": "compaction_finished", "compaction_time_micros": 100071, "compaction_time_cpu_micros": 55937, "output_level": 6, "num_output_files": 1, "total_output_size": 17167680, "num_input_records": 12547, "num_output_records": 12015, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768243902, "job": 10, "event": "table_file_deletion", "file_number": 23} Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768247284, "job": 
10, "event": "table_file_deletion", "file_number": 21} Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.140499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.247443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.247487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.247490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.247493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:02:48 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.247496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:02:48 localhost nova_compute[281045]: 2025-12-02 10:02:48.429 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:02:48 localhost nova_compute[281045]: 2025-12-02 10:02:48.430 281049 DEBUG nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Start building networks asynchronously for instance. 
_build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Dec 2 05:02:48 localhost nova_compute[281045]: 2025-12-02 10:02:48.500 281049 DEBUG nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m Dec 2 05:02:48 localhost nova_compute[281045]: 2025-12-02 10:02:48.529 281049 INFO nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m Dec 2 05:02:48 localhost nova_compute[281045]: 2025-12-02 10:02:48.554 281049 DEBUG nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Dec 2 05:02:48 localhost nova_compute[281045]: 2025-12-02 10:02:48.667 281049 DEBUG nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Start spawning the instance on the hypervisor. 
_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Dec 2 05:02:48 localhost nova_compute[281045]: 2025-12-02 10:02:48.670 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Dec 2 05:02:48 localhost nova_compute[281045]: 2025-12-02 10:02:48.671 281049 INFO nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Creating image(s)#033[00m Dec 2 05:02:48 localhost nova_compute[281045]: 2025-12-02 10:02:48.705 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:02:48 localhost nova_compute[281045]: 2025-12-02 10:02:48.741 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:02:48 localhost nova_compute[281045]: 2025-12-02 10:02:48.787 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:02:48 
localhost nova_compute[281045]: 2025-12-02 10:02:48.793 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:02:48 localhost nova_compute[281045]: 2025-12-02 10:02:48.794 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:02:48 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v86: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:49 localhost nova_compute[281045]: 2025-12-02 10:02:49.906 281049 DEBUG nova.virt.libvirt.imagebackend [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Image locations are: [{'url': 'rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/images/d85e840d-fa56-497b-b5bd-b49584d3e97a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/images/d85e840d-fa56-497b-b5bd-b49584d3e97a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Dec 2 05:02:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 05:02:50 localhost podman[306753]: 2025-12-02 10:02:50.478615067 +0000 UTC m=+0.481701576 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 05:02:50 localhost podman[306753]: 2025-12-02 10:02:50.725518454 +0000 UTC m=+0.728605013 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:02:50 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v87: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail Dec 2 05:02:50 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:02:52 localhost nova_compute[281045]: 2025-12-02 10:02:52.151 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:02:52 localhost nova_compute[281045]: 2025-12-02 10:02:52.219 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.part --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:02:52 localhost nova_compute[281045]: 2025-12-02 10:02:52.221 281049 DEBUG nova.virt.images [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] d85e840d-fa56-497b-b5bd-b49584d3e97a was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m Dec 2 05:02:52 localhost nova_compute[281045]: 2025-12-02 10:02:52.222 281049 DEBUG nova.privsep.utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Dec 2 05:02:52 localhost nova_compute[281045]: 2025-12-02 10:02:52.223 281049 DEBUG 
oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.part /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:02:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:52 localhost nova_compute[281045]: 2025-12-02 10:02:52.404 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.part /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.converted" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:02:52 localhost nova_compute[281045]: 2025-12-02 10:02:52.407 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:02:52 localhost nova_compute[281045]: 2025-12-02 10:02:52.483 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 
09cae3217c5e430b8dbe17828669a978 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.converted --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:02:52 localhost nova_compute[281045]: 2025-12-02 10:02:52.484 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 3.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:02:52 localhost nova_compute[281045]: 2025-12-02 10:02:52.511 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:02:52 localhost nova_compute[281045]: 2025-12-02 10:02:52.515 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:02:52 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v88: 177 pgs: 177 active+clean; 152 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 105 KiB/s wr, 13 op/s Dec 2 
05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.205 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.690s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.302 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] resizing rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.467 281049 DEBUG nova.objects.instance [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lazy-loading 'migration_context' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.775 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.775 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] 
[instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Ensure instance console log exists: /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.776 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.777 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.777 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.780 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 
'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-12-02T10:01:55Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'image_id': 'd85e840d-fa56-497b-b5bd-b49584d3e97a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.788 281049 WARNING nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.791 281049 DEBUG nova.virt.libvirt.host [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V1... 
_has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.792 281049 DEBUG nova.virt.libvirt.host [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.794 281049 DEBUG nova.virt.libvirt.host [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.795 281049 DEBUG nova.virt.libvirt.host [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.796 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.796 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T10:01:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='82beb986-6d20-42dc-b738-1cef87dee30f',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-12-02T10:01:55Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.797 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.797 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.798 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.798 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.799 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.799 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.800 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.800 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.801 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.801 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Dec 2 05:02:53 localhost nova_compute[281045]: 2025-12-02 10:02:53.807 281049 DEBUG nova.privsep.utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Dec 2 05:02:53 
localhost nova_compute[281045]: 2025-12-02 10:02:53.808 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:02:54 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:02:54 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2378891610' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:02:54 localhost nova_compute[281045]: 2025-12-02 10:02:54.281 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:02:54 localhost nova_compute[281045]: 2025-12-02 10:02:54.308 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:02:54 localhost nova_compute[281045]: 2025-12-02 10:02:54.311 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:02:54 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:02:54 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3529410076' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:02:54 localhost nova_compute[281045]: 2025-12-02 10:02:54.783 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:02:54 localhost nova_compute[281045]: 2025-12-02 10:02:54.787 281049 DEBUG nova.objects.instance [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lazy-loading 'pci_devices' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:02:54 localhost nova_compute[281045]: 2025-12-02 10:02:54.831 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] End _get_guest_xml xml= Dec 2 05:02:54 localhost nova_compute[281045]: 268e09a3-7abe-4037-a14a-068e7b8a78fb Dec 2 05:02:54 localhost nova_compute[281045]: instance-00000006 Dec 2 05:02:54 localhost nova_compute[281045]: 131072 Dec 2 05:02:54 localhost nova_compute[281045]: 1 Dec 2 05:02:54 localhost nova_compute[281045]: Dec 2 05:02:54 localhost nova_compute[281045]: Dec 2 05:02:54 localhost nova_compute[281045]: Dec 2 05:02:54 localhost 
nova_compute[281045]: [libvirt guest XML elided: angle-bracketed markup was stripped during log extraction; recoverable values: nova:name tempest-UnshelveToHostMultiNodesTest-server-2084001492, creationTime 2025-12-02 10:02:53, flavor 128 MiB memory / 1 vCPU / 1 GiB root disk / 0 ephemeral / 0 swap, owner tempest-UnshelveToHostMultiNodesTest-557689334-project-member, project tempest-UnshelveToHostMultiNodesTest-557689334, sysinfo RDO / OpenStack Compute / 27.5.2-0.20250829104910.6f8decf.el9, uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb, os type hvm, rng backend /dev/urandom] _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Dec 2 05:02:54 localhost nova_compute[281045]: 2025-12-02 10:02:54.923 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] No BDM found with device name vda, not building metadata.
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 2 05:02:54 localhost nova_compute[281045]: 2025-12-02 10:02:54.924 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 2 05:02:54 localhost nova_compute[281045]: 2025-12-02 10:02:54.925 281049 INFO nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Using config drive#033[00m Dec 2 05:02:54 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v89: 177 pgs: 177 active+clean; 152 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 105 KiB/s wr, 13 op/s Dec 2 05:02:54 localhost nova_compute[281045]: 2025-12-02 10:02:54.960 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:02:54 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:02:54.976 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:02:54Z, description=, device_id=f36e6078-7c75-4c7a-9ef2-e9c65f9cb32e, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4265c039-facc-45ce-8659-ec262cbe782c, ip_allocation=immediate, 
mac_address=fa:16:3e:24:61:e3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=263, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:02:54Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:02:55 localhost podman[306993]: 2025-12-02 10:02:55.176750843 +0000 UTC m=+0.050952636 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 2 05:02:55 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:02:55 localhost dnsmasq-dhcp[262677]: read 
/var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:02:55 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:02:55 localhost nova_compute[281045]: 2025-12-02 10:02:55.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:55 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:02:55.710 262347 INFO neutron.agent.dhcp.agent [None req-ffbd8990-af5a-4331-a172-6451ebbfcb89 - - - - - -] DHCP configuration for ports {'4265c039-facc-45ce-8659-ec262cbe782c'} is completed#033[00m Dec 2 05:02:56 localhost nova_compute[281045]: 2025-12-02 10:02:56.001 281049 INFO nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Creating config drive at /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config#033[00m Dec 2 05:02:56 localhost nova_compute[281045]: 2025-12-02 10:02:56.007 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcm0b84ay execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:02:56 localhost nova_compute[281045]: 2025-12-02 10:02:56.133 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 
- - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcm0b84ay" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:02:56 localhost nova_compute[281045]: 2025-12-02 10:02:56.336 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:02:56 localhost nova_compute[281045]: 2025-12-02 10:02:56.341 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:02:56 localhost nova_compute[281045]: 2025-12-02 10:02:56.593 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:02:56 localhost nova_compute[281045]: 2025-12-02 10:02:56.595 281049 INFO 
nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deleting local config drive /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config because it was imported into RBD.#033[00m Dec 2 05:02:56 localhost systemd[1]: Started libvirt secret daemon. Dec 2 05:02:56 localhost systemd-machined[202765]: New machine qemu-1-instance-00000006. Dec 2 05:02:56 localhost systemd[1]: Started Virtual Machine qemu-1-instance-00000006. Dec 2 05:02:56 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v90: 177 pgs: 177 active+clean; 152 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 105 KiB/s wr, 13 op/s Dec 2 05:02:56 localhost nova_compute[281045]: 2025-12-02 10:02:56.981 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:02:56 localhost nova_compute[281045]: 2025-12-02 10:02:56.982 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] VM Resumed (Lifecycle Event)#033[00m Dec 2 05:02:56 localhost nova_compute[281045]: 2025-12-02 10:02:56.985 281049 DEBUG nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Dec 2 05:02:56 localhost nova_compute[281045]: 2025-12-02 10:02:56.985 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 
268e09a3-7abe-4037-a14a-068e7b8a78fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Dec 2 05:02:56 localhost nova_compute[281045]: 2025-12-02 10:02:56.988 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance spawned successfully.#033[00m Dec 2 05:02:56 localhost nova_compute[281045]: 2025-12-02 10:02:56.989 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.005 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.011 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.013 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 
268e09a3-7abe-4037-a14a-068e7b8a78fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.014 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.014 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.015 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.015 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:02:57 localhost 
nova_compute[281045]: 2025-12-02 10:02:57.015 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.044 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.044 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.045 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] VM Started (Lifecycle Event)#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.088 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.091 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.151 281049 INFO nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Took 8.48 seconds to spawn the instance on the hypervisor.#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.152 281049 DEBUG nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.162 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.236 281049 INFO nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Took 10.41 seconds to build instance.#033[00m Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.256 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 10.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:02:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e94 e94: 6 total, 6 up, 6 in Dec 2 05:02:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:57 localhost nova_compute[281045]: 2025-12-02 10:02:57.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:58 localhost nova_compute[281045]: 2025-12-02 10:02:58.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:58 localhost nova_compute[281045]: 2025-12-02 10:02:58.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task 
ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:58 localhost nova_compute[281045]: 2025-12-02 10:02:58.718 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:02:58 localhost nova_compute[281045]: 2025-12-02 10:02:58.719 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" acquired by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:02:58 localhost nova_compute[281045]: 2025-12-02 10:02:58.719 281049 INFO nova.compute.manager [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Shelving#033[00m Dec 2 05:02:58 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v92: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 79 op/s Dec 2 05:02:58 localhost nova_compute[281045]: 2025-12-02 10:02:58.995 281049 DEBUG nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m Dec 2 
05:02:59 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e95 e95: 6 total, 6 up, 6 in Dec 2 05:03:00 localhost nova_compute[281045]: 2025-12-02 10:03:00.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:00 localhost nova_compute[281045]: 2025-12-02 10:03:00.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:03:00 localhost nova_compute[281045]: 2025-12-02 10:03:00.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:03:00 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v94: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 467 KiB/s rd, 2.5 MiB/s wr, 79 op/s Dec 2 05:03:00 localhost nova_compute[281045]: 2025-12-02 10:03:00.983 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:03:00 localhost nova_compute[281045]: 2025-12-02 10:03:00.984 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquired lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:03:00 localhost nova_compute[281045]: 2025-12-02 10:03:00.984 281049 DEBUG nova.network.neutron [None 
req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:03:00 localhost nova_compute[281045]: 2025-12-02 10:03:00.985 281049 DEBUG nova.objects.instance [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:03:01 localhost nova_compute[281045]: 2025-12-02 10:03:01.246 281049 DEBUG nova.network.neutron [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Dec 2 05:03:01 localhost nova_compute[281045]: 2025-12-02 10:03:01.421 281049 DEBUG nova.network.neutron [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:03:01 localhost nova_compute[281045]: 2025-12-02 10:03:01.441 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Releasing lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:03:01 localhost nova_compute[281045]: 2025-12-02 10:03:01.442 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:03:01 localhost 
nova_compute[281045]: 2025-12-02 10:03:01.442 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:01 localhost nova_compute[281045]: 2025-12-02 10:03:01.443 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:01 localhost nova_compute[281045]: 2025-12-02 10:03:01.443 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 2 05:03:01 localhost nova_compute[281045]: 2025-12-02 10:03:01.540 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:01 localhost nova_compute[281045]: 2025-12-02 10:03:01.541 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:01 localhost nova_compute[281045]: 2025-12-02 10:03:01.541 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:03:01 localhost nova_compute[281045]: 2025-12-02 10:03:01.542 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:01 localhost nova_compute[281045]: 2025-12-02 10:03:01.565 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:01 localhost nova_compute[281045]: 2025-12-02 10:03:01.566 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:01 localhost nova_compute[281045]: 2025-12-02 10:03:01.566 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:01 localhost nova_compute[281045]: 2025-12-02 10:03:01.567 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:03:01 localhost nova_compute[281045]: 2025-12-02 10:03:01.568 
281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:03:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:03:02 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/388746291' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:03:02 localhost nova_compute[281045]: 2025-12-02 10:03:02.061 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:03:02 localhost nova_compute[281045]: 2025-12-02 10:03:02.110 281049 DEBUG nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:03:02 localhost nova_compute[281045]: 2025-12-02 10:03:02.110 281049 DEBUG nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:03:02 localhost nova_compute[281045]: 2025-12-02 10:03:02.229 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:03:02 localhost nova_compute[281045]: 2025-12-02 10:03:02.230 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11770MB free_disk=41.774322509765625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:03:02 localhost nova_compute[281045]: 2025-12-02 10:03:02.231 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:02 localhost nova_compute[281045]: 2025-12-02 10:03:02.231 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:02 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v95: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 2.5 MiB/s wr, 190 op/s Dec 2 05:03:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:03.174 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:03.176 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired 
by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:03.176 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:03 localhost nova_compute[281045]: 2025-12-02 10:03:03.342 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Instance 268e09a3-7abe-4037-a14a-068e7b8a78fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:03:03 localhost nova_compute[281045]: 2025-12-02 10:03:03.343 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:03:03 localhost nova_compute[281045]: 2025-12-02 10:03:03.343 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:03:03 localhost podman[239757]: time="2025-12-02T10:03:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:03:03 localhost podman[239757]: @ - - [02/Dec/2025:10:03:03 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:03:03 localhost podman[239757]: @ - - [02/Dec/2025:10:03:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19202 "" "Go-http-client/1.1" Dec 2 05:03:03 localhost nova_compute[281045]: 2025-12-02 10:03:03.808 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 05:03:04 localhost neutron_sriov_agent[255428]: 2025-12-02 10:03:04.328 2 INFO neutron.agent.securitygroups_rpc [None req-5c06dfad-89c5-4abc-a7de-de583f339085 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Security group member updated ['5c93e274-85ac-42d3-b949-bdb62e6b8c39']#033[00m Dec 2 05:03:04 localhost nova_compute[281045]: 2025-12-02 10:03:04.433 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 05:03:04 localhost nova_compute[281045]: 2025-12-02 10:03:04.434 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 
9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 05:03:04 localhost nova_compute[281045]: 2025-12-02 10:03:04.453 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 05:03:04 localhost nova_compute[281045]: 2025-12-02 10:03:04.485 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: 
HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 05:03:04 localhost nova_compute[281045]: 2025-12-02 10:03:04.529 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:03:04 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v96: 177 pgs: 177 active+clean; 192 MiB data, 
775 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 2.5 MiB/s wr, 190 op/s Dec 2 05:03:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:03:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4080154938' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:03:04 localhost nova_compute[281045]: 2025-12-02 10:03:04.988 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:03:04 localhost nova_compute[281045]: 2025-12-02 10:03:04.996 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 05:03:05 localhost nova_compute[281045]: 2025-12-02 10:03:05.079 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updated inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 
'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m Dec 2 05:03:05 localhost nova_compute[281045]: 2025-12-02 10:03:05.079 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Dec 2 05:03:05 localhost nova_compute[281045]: 2025-12-02 10:03:05.080 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 05:03:05 localhost nova_compute[281045]: 2025-12-02 10:03:05.113 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:03:05 localhost nova_compute[281045]: 2025-12-02 10:03:05.113 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.882s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:05 localhost nova_compute[281045]: 2025-12-02 10:03:05.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:05 localhost nova_compute[281045]: 2025-12-02 10:03:05.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 2 05:03:05 localhost nova_compute[281045]: 2025-12-02 10:03:05.645 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 2 05:03:05 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e96 e96: 6 total, 6 up, 6 in Dec 2 05:03:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:03:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:03:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:03:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:03:06 localhost systemd[1]: tmp-crun.MYhWpy.mount: Deactivated successfully. 
Dec 2 05:03:06 localhost podman[307180]: 2025-12-02 10:03:06.150775802 +0000 UTC m=+0.100629773 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:03:06 localhost podman[307179]: 2025-12-02 10:03:06.164810237 +0000 UTC m=+0.146065969 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, 
container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:03:06 localhost podman[307177]: 2025-12-02 10:03:06.203192891 +0000 UTC m=+0.192249120 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent) Dec 2 05:03:06 localhost podman[307177]: 2025-12-02 10:03:06.211804632 +0000 UTC m=+0.200860851 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 05:03:06 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 05:03:06 localhost podman[307179]: 2025-12-02 10:03:06.229603761 +0000 UTC m=+0.210859463 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 05:03:06 localhost podman[307180]: 2025-12-02 10:03:06.236775999 +0000 UTC m=+0.186630190 container exec_died 
c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 05:03:06 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:03:06 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:03:06 localhost podman[307178]: 2025-12-02 10:03:06.116088059 +0000 UTC m=+0.100723755 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:03:06 localhost podman[307178]: 2025-12-02 10:03:06.301009506 +0000 UTC m=+0.285645202 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 05:03:06 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:03:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:03:06 Dec 2 05:03:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 05:03:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap Dec 2 05:03:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['manila_data', 'vms', 'images', 'backups', 'manila_metadata', '.mgr', 'volumes'] Dec 2 05:03:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes Dec 2 05:03:06 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v98: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.5 MiB/s rd, 1.7 KiB/s wr, 111 op/s Dec 2 05:03:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:03:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:03:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:03:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:03:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:03:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:03:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:03:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:03:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 2 05:03:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:03:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004817926437744277 of space, bias 1.0, pg target 0.9635852875488554 quantized to 32 (current 32) Dec 2 05:03:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:03:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:03:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:03:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8570103846780196 quantized to 32 (current 32) Dec 2 05:03:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:03:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:03:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:03:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:03:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 
05:03:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001949853433835846 quantized to 16 (current 16) Dec 2 05:03:06 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:03:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:03:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:03:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:03:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:03:06 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:03:06 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:03:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:03:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:03:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:03:07 localhost neutron_sriov_agent[255428]: 2025-12-02 10:03:07.116 2 INFO neutron.agent.securitygroups_rpc [None req-897aec69-e9e3-465e-bb92-a062d09dda9e 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Security group member updated ['5c93e274-85ac-42d3-b949-bdb62e6b8c39']#033[00m Dec 2 05:03:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:07 localhost nova_compute[281045]: 2025-12-02 10:03:07.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task 
ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:08 localhost ovn_controller[153778]: 2025-12-02T10:03:08Z|00039|memory|INFO|peak resident set size grew 53% in last 2264.5 seconds, from 13080 kB to 20000 kB Dec 2 05:03:08 localhost ovn_controller[153778]: 2025-12-02T10:03:08Z|00040|memory|INFO|idl-cells-OVN_Southbound:7343 idl-cells-Open_vSwitch:1041 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:185 lflow-cache-entries-cache-matches:228 lflow-cache-size-KB:708 local_datapath_usage-KB:2 ofctrl_desired_flow_usage-KB:325 ofctrl_installed_flow_usage-KB:239 ofctrl_sb_flow_ref_usage-KB:127 Dec 2 05:03:08 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:08.598 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:08Z, description=, device_id=2f08599e-d6d6-408a-a486-e1f5476b437a, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4916a0d9-8c51-43e8-a712-508fc8b29742, ip_allocation=immediate, mac_address=fa:16:3e:15:88:70, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, 
subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=364, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:03:08Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:03:08 localhost podman[307276]: 2025-12-02 10:03:08.751294628 +0000 UTC m=+0.033395634 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:03:08 localhost systemd[1]: tmp-crun.tSGOZP.mount: Deactivated successfully. 
Dec 2 05:03:08 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses Dec 2 05:03:08 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:03:08 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:03:08 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v99: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 1.5 KiB/s wr, 93 op/s Dec 2 05:03:08 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:08.937 262347 INFO neutron.agent.dhcp.agent [None req-b931de01-4140-4357-892c-4690f3d0965e - - - - - -] DHCP configuration for ports {'4916a0d9-8c51-43e8-a712-508fc8b29742'} is completed#033[00m Dec 2 05:03:09 localhost nova_compute[281045]: 2025-12-02 10:03:09.041 281049 DEBUG nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m Dec 2 05:03:10 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v100: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 1.4 KiB/s wr, 89 op/s Dec 2 05:03:11 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:11.844 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:10Z, description=, device_id=c633bc2a-d8d8-4d52-951c-727821eef4f5, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=03bfc4ec-c7c2-4fb4-8f6a-cb567b21dd97, ip_allocation=immediate, mac_address=fa:16:3e:dd:9c:3c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=401, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:03:10Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:03:12 localhost openstack_network_exporter[241816]: ERROR 10:03:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:03:12 localhost openstack_network_exporter[241816]: ERROR 10:03:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:03:12 localhost openstack_network_exporter[241816]: ERROR 10:03:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:03:12 localhost openstack_network_exporter[241816]: ERROR 10:03:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:03:12 localhost openstack_network_exporter[241816]: Dec 2 05:03:12 localhost openstack_network_exporter[241816]: ERROR 10:03:12 
appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:03:12 localhost openstack_network_exporter[241816]: Dec 2 05:03:12 localhost podman[307312]: 2025-12-02 10:03:12.146229091 +0000 UTC m=+0.068346043 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:03:12 localhost systemd[1]: tmp-crun.naN9Ii.mount: Deactivated successfully. Dec 2 05:03:12 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 5 addresses Dec 2 05:03:12 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:03:12 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:03:12 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:12.401 262347 INFO neutron.agent.dhcp.agent [None req-e4c035ea-f24e-4c10-9fcb-762f81b0aa77 - - - - - -] DHCP configuration for ports {'03bfc4ec-c7c2-4fb4-8f6a-cb567b21dd97'} is completed#033[00m Dec 2 05:03:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:12 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v101: 177 pgs: 177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 351 KiB/s rd, 2.6 MiB/s wr, 71 op/s Dec 2 05:03:14 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v102: 177 pgs: 
177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 351 KiB/s rd, 2.6 MiB/s wr, 71 op/s Dec 2 05:03:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:03:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:03:15 localhost podman[307333]: 2025-12-02 10:03:15.076940509 +0000 UTC m=+0.084886015 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:03:15 localhost podman[307333]: 2025-12-02 10:03:15.085728965 +0000 UTC 
m=+0.093674521 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:03:15 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:03:15 localhost systemd[1]: tmp-crun.BqIG1R.mount: Deactivated successfully. 
Dec 2 05:03:15 localhost podman[307334]: 2025-12-02 10:03:15.189013087 +0000 UTC m=+0.194129687 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, version=9.6, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, release=1755695350, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible) Dec 2 05:03:15 localhost podman[307334]: 2025-12-02 10:03:15.203160586 +0000 UTC m=+0.208277236 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 
'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, name=ubi9-minimal, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 05:03:15 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 05:03:16 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v103: 177 pgs: 177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 314 KiB/s rd, 2.3 MiB/s wr, 64 op/s Dec 2 05:03:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:03:17 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:03:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:03:17 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:03:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:03:17 localhost ceph-mgr[287188]: [progress INFO root] update: 
starting ev b2c4c858-4826-44f1-b8ec-0a71ce93fc1e (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:03:17 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev b2c4c858-4826-44f1-b8ec-0a71ce93fc1e (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:03:17 localhost ceph-mgr[287188]: [progress INFO root] Completed event b2c4c858-4826-44f1-b8ec-0a71ce93fc1e (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:03:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:03:17 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:03:18 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:03:18 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:03:18 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v104: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 94 op/s Dec 2 05:03:20 localhost nova_compute[281045]: 2025-12-02 10:03:20.089 281049 DEBUG nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m Dec 2 05:03:20 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v105: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 93 op/s Dec 2 05:03:21 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:03:21 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:21.297 262347 INFO neutron.agent.linux.ip_lib [None req-5acd532c-08ca-4a9d-a064-c6abe7e794ca - - - - - -] Device tap955afcdf-dd cannot be used as it has no MAC address#033[00m Dec 2 05:03:21 localhost kernel: device tap955afcdf-dd entered promiscuous mode Dec 2 05:03:21 localhost NetworkManager[5967]: [1764669801.3236] manager: (tap955afcdf-dd): new Generic device (/org/freedesktop/NetworkManager/Devices/15) Dec 2 05:03:21 localhost ovn_controller[153778]: 2025-12-02T10:03:21Z|00041|binding|INFO|Claiming lport 955afcdf-dd99-4cb5-939f-5919590f8e3b for this chassis. Dec 2 05:03:21 localhost ovn_controller[153778]: 2025-12-02T10:03:21Z|00042|binding|INFO|955afcdf-dd99-4cb5-939f-5919590f8e3b: Claiming unknown Dec 2 05:03:21 localhost systemd-udevd[307488]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:03:21 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:21.337 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-5376d097-2da8-4019-8e01-8b89ed4f41cf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5376d097-2da8-4019-8e01-8b89ed4f41cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'edfb5cc295894fc9a8dc307891edb831', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a37dcb8f-9361-4075-bf0e-f19264ce897a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=955afcdf-dd99-4cb5-939f-5919590f8e3b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:03:21 localhost podman[307467]: 2025-12-02 10:03:21.337911861 +0000 UTC m=+0.094702843 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 05:03:21 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:21.339 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 955afcdf-dd99-4cb5-939f-5919590f8e3b in datapath 5376d097-2da8-4019-8e01-8b89ed4f41cf bound to our chassis#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:21.344 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5376d097-2da8-4019-8e01-8b89ed4f41cf or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:21.345 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9bee3e-7826-4a67-9162-eb61b55bff38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost ovn_controller[153778]: 2025-12-02T10:03:21Z|00043|binding|INFO|Setting lport 955afcdf-dd99-4cb5-939f-5919590f8e3b ovn-installed in OVS Dec 2 05:03:21 localhost ovn_controller[153778]: 2025-12-02T10:03:21Z|00044|binding|INFO|Setting lport 955afcdf-dd99-4cb5-939f-5919590f8e3b up in Southbound Dec 2 05:03:21 localhost journal[229262]: ethtool ioctl error on tap955afcdf-dd: No such device Dec 2 05:03:21 localhost journal[229262]: ethtool ioctl error on tap955afcdf-dd: No such device Dec 2 05:03:21 localhost journal[229262]: ethtool ioctl error on tap955afcdf-dd: No such device Dec 2 05:03:21 localhost journal[229262]: ethtool ioctl error on tap955afcdf-dd: No such device Dec 2 05:03:21 localhost journal[229262]: ethtool ioctl error on tap955afcdf-dd: No such device Dec 2 05:03:21 localhost 
journal[229262]: ethtool ioctl error on tap955afcdf-dd: No such device Dec 2 05:03:21 localhost podman[307467]: 2025-12-02 10:03:21.384100602 +0000 UTC m=+0.140891654 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:03:21 localhost journal[229262]: ethtool ioctl error on tap955afcdf-dd: No such device Dec 2 05:03:21 
localhost journal[229262]: ethtool ioctl error on tap955afcdf-dd: No such device Dec 2 05:03:21 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 05:03:21 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:03:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:03:22 localhost podman[307564]: Dec 2 05:03:22 localhost podman[307564]: 2025-12-02 10:03:22.308016624 +0000 UTC m=+0.120399922 container create c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:03:22 localhost podman[307564]: 2025-12-02 10:03:22.233184706 +0000 UTC m=+0.045568014 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:03:22 localhost systemd[1]: Started libpod-conmon-c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367.scope. Dec 2 05:03:22 localhost systemd[1]: Started libcrun container. 
Dec 2 05:03:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1e695aeb824d8fe3d96595a317c2bff704005f07a946fa3818e91a4a7fa6e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:03:22 localhost podman[307564]: 2025-12-02 10:03:22.37876608 +0000 UTC m=+0.191149338 container init c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:03:22 localhost systemd[1]: tmp-crun.1FTKeb.mount: Deactivated successfully. Dec 2 05:03:22 localhost dnsmasq[307582]: started, version 2.85 cachesize 150 Dec 2 05:03:22 localhost dnsmasq[307582]: DNS service limited to local subnets Dec 2 05:03:22 localhost dnsmasq[307582]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:03:22 localhost dnsmasq[307582]: warning: no upstream servers configured Dec 2 05:03:22 localhost dnsmasq-dhcp[307582]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:03:22 localhost dnsmasq[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/addn_hosts - 0 addresses Dec 2 05:03:22 localhost dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/host Dec 2 05:03:22 localhost dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/opts Dec 2 05:03:22 localhost podman[307564]: 2025-12-02 10:03:22.402838469 +0000 UTC m=+0.215221767 container start 
c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:03:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:22 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Deactivated successfully. Dec 2 05:03:22 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Consumed 13.513s CPU time. Dec 2 05:03:22 localhost systemd-machined[202765]: Machine qemu-1-instance-00000006 terminated. 
Dec 2 05:03:22 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:22.586 262347 INFO neutron.agent.dhcp.agent [None req-04c883c2-f92e-43dc-8ef1-30e0a2586132 - - - - - -] DHCP configuration for ports {'49323d14-7592-4a54-9b77-1ecf72f22e67'} is completed#033[00m Dec 2 05:03:22 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:22.891 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:22Z, description=, device_id=5ca2e9db-941e-4fab-a091-25cb4779ba29, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8f3263d0-7d7a-4de5-9c22-9ff5f0990009, ip_allocation=immediate, mac_address=fa:16:3e:b7:c0:89, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=443, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:03:22Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:03:22 localhost 
ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v106: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 121 op/s Dec 2 05:03:23 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:03:23 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses Dec 2 05:03:23 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:03:23 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:03:23 localhost podman[307605]: 2025-12-02 10:03:23.101049568 +0000 UTC m=+0.060060991 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:03:23 localhost nova_compute[281045]: 2025-12-02 10:03:23.106 281049 INFO nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance shutdown successfully after 24 seconds.#033[00m Dec 2 05:03:23 localhost nova_compute[281045]: 2025-12-02 10:03:23.113 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance destroyed successfully.#033[00m Dec 2 05:03:23 localhost nova_compute[281045]: 2025-12-02 10:03:23.113 281049 DEBUG nova.objects.instance [None req-6e187907-d676-4b56-a217-2fccf411986a 
96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lazy-loading 'numa_topology' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:03:23 localhost nova_compute[281045]: 2025-12-02 10:03:23.182 281049 INFO nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Beginning cold snapshot process#033[00m Dec 2 05:03:23 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:23.619 262347 INFO neutron.agent.dhcp.agent [None req-62eddd7d-7df6-421a-b8f7-a131f5868c24 - - - - - -] DHCP configuration for ports {'8f3263d0-7d7a-4de5-9c22-9ff5f0990009'} is completed#033[00m Dec 2 05:03:23 localhost nova_compute[281045]: 2025-12-02 10:03:23.787 281049 DEBUG nova.virt.libvirt.imagebackend [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] No parent info for d85e840d-fa56-497b-b5bd-b49584d3e97a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m Dec 2 05:03:23 localhost nova_compute[281045]: 2025-12-02 10:03:23.819 281049 DEBUG nova.storage.rbd_utils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] creating snapshot(c66f7cd910e54661adc476e3131c14ea) on rbd image(268e09a3-7abe-4037-a14a-068e7b8a78fb_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Dec 2 05:03:24 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e97 e97: 6 total, 6 up, 6 in Dec 2 05:03:24 localhost nova_compute[281045]: 2025-12-02 10:03:24.136 281049 DEBUG nova.storage.rbd_utils [None req-6e187907-d676-4b56-a217-2fccf411986a 
96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] cloning vms/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk@c66f7cd910e54661adc476e3131c14ea to images/c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Dec 2 05:03:24 localhost nova_compute[281045]: 2025-12-02 10:03:24.514 281049 DEBUG nova.storage.rbd_utils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] flattening images/c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Dec 2 05:03:24 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v108: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 74 op/s Dec 2 05:03:25 localhost nova_compute[281045]: 2025-12-02 10:03:25.451 281049 DEBUG nova.storage.rbd_utils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] removing snapshot(c66f7cd910e54661adc476e3131c14ea) on rbd image(268e09a3-7abe-4037-a14a-068e7b8a78fb_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m Dec 2 05:03:26 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e98 e98: 6 total, 6 up, 6 in Dec 2 05:03:26 localhost nova_compute[281045]: 2025-12-02 10:03:26.245 281049 DEBUG nova.storage.rbd_utils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] creating snapshot(snap) on rbd image(c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Dec 2 05:03:26 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v110: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 640 KiB/s rd, 42 KiB/s wr, 
41 op/s Dec 2 05:03:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e99 e99: 6 total, 6 up, 6 in Dec 2 05:03:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:28 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v112: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 251 op/s Dec 2 05:03:29 localhost nova_compute[281045]: 2025-12-02 10:03:29.627 281049 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Creating tmpfile /var/lib/nova/instances/tmp6m2ihysk to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m Dec 2 05:03:29 localhost nova_compute[281045]: 2025-12-02 10:03:29.653 281049 DEBUG nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] destination check data is LibvirtLiveMigrateData(bdms=,block_migration=,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp6m2ihysk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=,is_shared_block_storage=,is_shared_instance_path=,is_volume_backed=,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_destination 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m Dec 2 05:03:29 localhost nova_compute[281045]: 2025-12-02 10:03:29.704 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:03:29 localhost nova_compute[281045]: 2025-12-02 10:03:29.704 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:03:29 localhost nova_compute[281045]: 2025-12-02 10:03:29.735 281049 INFO nova.compute.rpcapi [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m Dec 2 05:03:29 localhost nova_compute[281045]: 2025-12-02 10:03:29.736 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:03:29 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:29.791 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:25Z, description=, device_id=5ca2e9db-941e-4fab-a091-25cb4779ba29, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], 
fixed_ips=[], id=667c36b3-db20-46dc-9ff7-d5dee0a9356b, ip_allocation=immediate, mac_address=fa:16:3e:ff:f8:bc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:03:14Z, description=, dns_domain=, id=5376d097-2da8-4019-8e01-8b89ed4f41cf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-1756550547-network, port_security_enabled=True, project_id=edfb5cc295894fc9a8dc307891edb831, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=5622, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=417, status=ACTIVE, subnets=['d68ba9ba-0fb5-4191-b45f-f1c149653c02'], tags=[], tenant_id=edfb5cc295894fc9a8dc307891edb831, updated_at=2025-12-02T10:03:19Z, vlan_transparent=None, network_id=5376d097-2da8-4019-8e01-8b89ed4f41cf, port_security_enabled=False, project_id=edfb5cc295894fc9a8dc307891edb831, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=445, status=DOWN, tags=[], tenant_id=edfb5cc295894fc9a8dc307891edb831, updated_at=2025-12-02T10:03:25Z on network 5376d097-2da8-4019-8e01-8b89ed4f41cf#033[00m Dec 2 05:03:29 localhost podman[307785]: 2025-12-02 10:03:29.99619321 +0000 UTC m=+0.054867044 container kill c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:03:29 localhost dnsmasq[307582]: read 
/var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/addn_hosts - 1 addresses Dec 2 05:03:29 localhost dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/host Dec 2 05:03:29 localhost dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/opts Dec 2 05:03:30 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e100 e100: 6 total, 6 up, 6 in Dec 2 05:03:30 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v114: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 251 op/s Dec 2 05:03:32 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:32.335 262347 INFO neutron.agent.dhcp.agent [None req-f75195db-2c41-4bc9-bd43-698d7089f316 - - - - - -] DHCP configuration for ports {'667c36b3-db20-46dc-9ff7-d5dee0a9356b'} is completed#033[00m Dec 2 05:03:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:32 localhost nova_compute[281045]: 2025-12-02 10:03:32.429 281049 INFO nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Snapshot image upload complete#033[00m Dec 2 05:03:32 localhost nova_compute[281045]: 2025-12-02 10:03:32.430 281049 DEBUG nova.compute.manager [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:03:32 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:32.434 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], 
binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:29Z, description=, device_id=2f998fb5-566b-4272-a579-f71fea3296d4, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3fa961f6-eb13-451b-a216-8747851567ad, ip_allocation=immediate, mac_address=fa:16:3e:db:c7:ae, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=446, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:03:29Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:03:32 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:32.665 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 
'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:03:32 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:32.667 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:03:32 localhost nova_compute[281045]: 2025-12-02 10:03:32.685 281049 INFO nova.compute.manager [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Shelve offloading#033[00m Dec 2 05:03:32 localhost nova_compute[281045]: 2025-12-02 10:03:32.695 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance destroyed successfully.#033[00m Dec 2 05:03:32 localhost nova_compute[281045]: 2025-12-02 10:03:32.696 281049 DEBUG nova.compute.manager [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:03:32 localhost nova_compute[281045]: 2025-12-02 10:03:32.699 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:03:32 localhost nova_compute[281045]: 2025-12-02 10:03:32.700 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - 
default default] Acquired lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:03:32 localhost nova_compute[281045]: 2025-12-02 10:03:32.700 281049 DEBUG nova.network.neutron [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 2 05:03:32 localhost nova_compute[281045]: 2025-12-02 10:03:32.767 281049 DEBUG nova.network.neutron [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Dec 2 05:03:32 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 7 addresses Dec 2 05:03:32 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:03:32 localhost systemd[1]: tmp-crun.XPpFpd.mount: Deactivated successfully. 
Dec 2 05:03:32 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:03:32 localhost podman[307823]: 2025-12-02 10:03:32.833573609 +0000 UTC m=+0.046275805 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:03:32 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v115: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 9.6 MiB/s rd, 6.9 MiB/s wr, 240 op/s Dec 2 05:03:32 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:32.997 262347 INFO neutron.agent.dhcp.agent [None req-db9e299b-a10d-40d7-a058-32d8e31bdd97 - - - - - -] DHCP configuration for ports {'3fa961f6-eb13-451b-a216-8747851567ad'} is completed#033[00m Dec 2 05:03:33 localhost nova_compute[281045]: 2025-12-02 10:03:33.004 281049 DEBUG nova.network.neutron [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:03:33 localhost nova_compute[281045]: 2025-12-02 10:03:33.020 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Releasing lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" 
lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:03:33 localhost nova_compute[281045]: 2025-12-02 10:03:33.028 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance destroyed successfully.#033[00m Dec 2 05:03:33 localhost nova_compute[281045]: 2025-12-02 10:03:33.029 281049 DEBUG nova.objects.instance [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lazy-loading 'resources' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:03:33 localhost podman[239757]: time="2025-12-02T10:03:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:03:33 localhost nova_compute[281045]: 2025-12-02 10:03:33.637 281049 INFO nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deleting instance files /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb_del#033[00m Dec 2 05:03:33 localhost nova_compute[281045]: 2025-12-02 10:03:33.638 281049 INFO nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deletion of /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb_del complete#033[00m Dec 2 05:03:33 localhost podman[239757]: @ - - [02/Dec/2025:10:03:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158570 "" "Go-http-client/1.1" Dec 2 05:03:33 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:33.669 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): 
DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:33 localhost podman[239757]: @ - - [02/Dec/2025:10:03:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19679 "" "Go-http-client/1.1" Dec 2 05:03:33 localhost nova_compute[281045]: 2025-12-02 10:03:33.784 281049 DEBUG nova.virt.libvirt.host [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m Dec 2 05:03:33 localhost nova_compute[281045]: 2025-12-02 10:03:33.785 281049 INFO nova.virt.libvirt.host [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] UEFI support detected#033[00m Dec 2 05:03:33 localhost nova_compute[281045]: 2025-12-02 10:03:33.922 281049 INFO nova.scheduler.client.report [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Deleted allocations for instance 268e09a3-7abe-4037-a14a-068e7b8a78fb#033[00m Dec 2 05:03:34 localhost nova_compute[281045]: 2025-12-02 10:03:34.202 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:34 localhost nova_compute[281045]: 2025-12-02 10:03:34.202 281049 DEBUG oslo_concurrency.lockutils [None 
req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:34 localhost nova_compute[281045]: 2025-12-02 10:03:34.252 281049 DEBUG oslo_concurrency.processutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:03:34 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:03:34 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1135731250' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:03:34 localhost nova_compute[281045]: 2025-12-02 10:03:34.715 281049 DEBUG oslo_concurrency.processutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:03:34 localhost nova_compute[281045]: 2025-12-02 10:03:34.721 281049 DEBUG nova.compute.provider_tree [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:03:34 localhost nova_compute[281045]: 2025-12-02 
10:03:34.792 281049 DEBUG nova.scheduler.client.report [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:03:34 localhost nova_compute[281045]: 2025-12-02 10:03:34.843 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:34 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v116: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 8.1 MiB/s rd, 5.8 MiB/s wr, 202 op/s Dec 2 05:03:35 localhost nova_compute[281045]: 2025-12-02 10:03:35.095 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" "released" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: held 36.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:35 localhost nova_compute[281045]: 2025-12-02 10:03:35.267 281049 DEBUG nova.compute.manager [None 
req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp6m2ihysk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='63092ab0-9432-4c74-933e-e9d5428e6162',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m Dec 2 05:03:35 localhost nova_compute[281045]: 2025-12-02 10:03:35.732 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquiring lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:03:35 localhost nova_compute[281045]: 2025-12-02 10:03:35.732 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquired lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:03:35 localhost nova_compute[281045]: 2025-12-02 10:03:35.733 281049 DEBUG nova.network.neutron [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] 
Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 2 05:03:35 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:35.787 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:25Z, description=, device_id=5ca2e9db-941e-4fab-a091-25cb4779ba29, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=667c36b3-db20-46dc-9ff7-d5dee0a9356b, ip_allocation=immediate, mac_address=fa:16:3e:ff:f8:bc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:03:14Z, description=, dns_domain=, id=5376d097-2da8-4019-8e01-8b89ed4f41cf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-1756550547-network, port_security_enabled=True, project_id=edfb5cc295894fc9a8dc307891edb831, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=5622, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=417, status=ACTIVE, subnets=['d68ba9ba-0fb5-4191-b45f-f1c149653c02'], tags=[], tenant_id=edfb5cc295894fc9a8dc307891edb831, updated_at=2025-12-02T10:03:19Z, vlan_transparent=None, network_id=5376d097-2da8-4019-8e01-8b89ed4f41cf, port_security_enabled=False, project_id=edfb5cc295894fc9a8dc307891edb831, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=445, status=DOWN, tags=[], tenant_id=edfb5cc295894fc9a8dc307891edb831, updated_at=2025-12-02T10:03:25Z on network 5376d097-2da8-4019-8e01-8b89ed4f41cf#033[00m Dec 2 05:03:36 localhost dnsmasq[307582]: read 
/var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/addn_hosts - 1 addresses Dec 2 05:03:36 localhost podman[307901]: 2025-12-02 10:03:36.00416787 +0000 UTC m=+0.041625112 container kill c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:03:36 localhost dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/host Dec 2 05:03:36 localhost dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/opts Dec 2 05:03:36 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:36.298 262347 INFO neutron.agent.dhcp.agent [None req-9afe9aec-48f5-4778-90cb-3730bedb50bd - - - - - -] DHCP configuration for ports {'667c36b3-db20-46dc-9ff7-d5dee0a9356b'} is completed#033[00m Dec 2 05:03:36 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v117: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 6.7 MiB/s rd, 4.8 MiB/s wr, 167 op/s Dec 2 05:03:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:03:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:03:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:03:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:03:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:03:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:03:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:03:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:03:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:03:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:03:37 localhost systemd[1]: tmp-crun.atr2Ud.mount: Deactivated successfully. Dec 2 05:03:37 localhost podman[307921]: 2025-12-02 10:03:37.128924932 +0000 UTC m=+0.128903829 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:03:37 localhost podman[307922]: 2025-12-02 10:03:37.082639059 +0000 UTC m=+0.081791001 container health_status 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0) Dec 2 05:03:37 localhost podman[307921]: 2025-12-02 10:03:37.140782352 +0000 UTC m=+0.140761229 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:03:37 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 05:03:37 localhost podman[307920]: 2025-12-02 10:03:37.192936753 +0000 UTC m=+0.193788236 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 2 05:03:37 localhost podman[307920]: 2025-12-02 10:03:37.197514022 +0000 UTC 
m=+0.198365495 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:03:37 localhost podman[307923]: 2025-12-02 10:03:37.106934285 +0000 UTC m=+0.097258470 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:03:37 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 05:03:37 localhost podman[307922]: 2025-12-02 10:03:37.21459258 +0000 UTC m=+0.213744492 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:03:37 localhost podman[307923]: 2025-12-02 10:03:37.236550165 +0000 UTC m=+0.226874290 container exec_died 
c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:03:37 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:03:37 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.253 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquiring lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.253 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" acquired by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.254 281049 INFO nova.compute.manager [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Unshelving#033[00m Dec 2 05:03:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:37 localhost podman[308021]: 2025-12-02 10:03:37.440600612 +0000 UTC m=+0.046450919 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:03:37 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses Dec 2 05:03:37 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:03:37 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.463 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.464 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.467 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'pci_requests' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.626 281049 DEBUG nova.network.neutron [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] 
[instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Updating instance_info_cache with network_info: [{"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.745 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'numa_topology' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.748 281049 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.749 281049 INFO 
nova.compute.manager [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] VM Stopped (Lifecycle Event)#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.919 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Releasing lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.921 281049 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp6m2ihysk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='63092ab0-9432-4c74-933e-e9d5428e6162',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.922 281049 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Creating instance directory: /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162 
pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.923 281049 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Ensure instance console log exists: /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.924 281049 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.925 281049 DEBUG nova.virt.libvirt.vif [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T10:03:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-861747463',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541913.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-861747463',id=7,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-02T10:03:21Z,launched_on='np0005541913.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005541913.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='cccbafb2e3c343b2aab51714734bddce',ramdisk_id='',reservation_id='r-sf2jj0i0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-5814605',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-5814605-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-12-02T10:03:21Z,user_data=None,user_id='60f523e6d03
743daa3ff6f5bc7122d00',uuid=63092ab0-9432-4c74-933e-e9d5428e6162,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.926 281049 DEBUG nova.network.os_vif_util [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Converting VIF {"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.928 281049 DEBUG nova.network.os_vif_util [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.929 281049 DEBUG os_vif [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 2 05:03:37 localhost 
nova_compute[281045]: 2025-12-02 10:03:37.969 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.969 281049 INFO nova.compute.claims [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Claim successful on node np0005541914.localdomain#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.982 281049 DEBUG ovsdbapp.backend.ovs_idl [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.983 281049 DEBUG ovsdbapp.backend.ovs_idl [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.983 281049 DEBUG ovsdbapp.backend.ovs_idl [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.984 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None 
req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.984 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.985 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.985 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.987 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:37 localhost nova_compute[281045]: 2025-12-02 10:03:37.989 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:38 localhost 
nova_compute[281045]: 2025-12-02 10:03:38.008 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.008 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.009 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.010 281049 INFO oslo.privsep.daemon [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpim6erqeg/privsep.sock']#033[00m Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.142 281049 DEBUG nova.compute.manager [None req-0f1d3d96-c07f-48e6-9675-f183a17c95f9 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.412 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf 
execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.616 281049 INFO oslo.privsep.daemon [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Spawned new privsep daemon via rootwrap#033[00m Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.522 308046 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.524 308046 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.526 308046 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.526 308046 INFO oslo.privsep.daemon [-] privsep daemon running as pid 308046#033[00m Dec 2 05:03:38 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:03:38 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/84271393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.876 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.883 281049 DEBUG nova.compute.provider_tree [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.889 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.890 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31de197b-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.892 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31de197b-ef, col_values=(('external_ids', {'iface-id': '31de197b-ef56-4d2a-9fa2-293715a60004', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:bb:bd', 'vm-uuid': '63092ab0-9432-4c74-933e-e9d5428e6162'}),)) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.930 281049 DEBUG nova.scheduler.client.report [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.937 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.940 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.941 281049 INFO os_vif [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef')
Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.942 281049 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.943 281049 DEBUG nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp6m2ihysk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='63092ab0-9432-4c74-933e-e9d5428e6162',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Dec 2 05:03:38 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v118: 177 pgs: 177 active+clean; 304 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 518 KiB/s rd, 2.6 MiB/s wr, 122 op/s
Dec 2 05:03:38 localhost nova_compute[281045]: 2025-12-02 10:03:38.967 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.077 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquiring lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.077 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquired lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.078 281049 DEBUG nova.network.neutron [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.153 281049 DEBUG nova.network.neutron [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.568 281049 DEBUG nova.network.neutron [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.585 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Releasing lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.587 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.588 281049 INFO nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Creating image(s)
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.628 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.633 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.684 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.725 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.730 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquiring lock "5233e139d3cbedb726dc33eeee1a17df7ea669b9" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.731 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "5233e139d3cbedb726dc33eeee1a17df7ea669b9"
acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.796 281049 DEBUG nova.virt.libvirt.imagebackend [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Image locations are: [{'url': 'rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/images/c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/images/c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.876 281049 DEBUG nova.virt.libvirt.imagebackend [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Selected location: {'url': 'rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/images/c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 2 05:03:39 localhost nova_compute[281045]: 2025-12-02 10:03:39.877 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] cloning images/c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0@snap to None/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 2 05:03:40 localhost nova_compute[281045]: 2025-12-02 10:03:40.081 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "5233e139d3cbedb726dc33eeee1a17df7ea669b9" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 05:03:40 localhost nova_compute[281045]: 2025-12-02 10:03:40.305 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'migration_context' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 2 05:03:40 localhost nova_compute[281045]: 2025-12-02 10:03:40.397 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] flattening vms/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 2 05:03:40 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v119: 177 pgs: 177 active+clean; 304 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 509 KiB/s rd, 2.5 MiB/s wr, 120 op/s
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.286 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.410 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Image rbd:vms/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.411 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.412 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Ensure instance console log exists: /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.412 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.413 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.413 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.416 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-02T10:02:58Z,direct_url=,disk_format='raw',id=c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-2084001492-shelved',owner='09cae3217c5e430b8dbe17828669a978',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2025-12-02T10:03:30Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'image_id': 'd85e840d-fa56-497b-b5bd-b49584d3e97a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.420 281049 WARNING nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.422 281049 DEBUG nova.virt.libvirt.host [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.423 281049 DEBUG nova.virt.libvirt.host [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.424 281049 DEBUG nova.virt.libvirt.host [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.425 281049 DEBUG nova.virt.libvirt.host [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CPU controller found on host.
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.425 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.426 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T10:01:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='82beb986-6d20-42dc-b738-1cef87dee30f',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-02T10:02:58Z,direct_url=,disk_format='raw',id=c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-2084001492-shelved',owner='09cae3217c5e430b8dbe17828669a978',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2025-12-02T10:03:30Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.427 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.428 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.428 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.429 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.429 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.430 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.430 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.431 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.431 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.431 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.432 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.460 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 05:03:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 2 05:03:41 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2234913475' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 2 05:03:41 localhost nova_compute[281045]: 2025-12-02 10:03:41.963 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.003 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.008 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.033 281049 DEBUG nova.network.neutron [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Port 31de197b-ef56-4d2a-9fa2-293715a60004 updated with migration profile {'migrating_to': 'np0005541914.localdomain'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.036 281049 DEBUG nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp6m2ihysk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='63092ab0-9432-4c74-933e-e9d5428e6162',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Dec 2 05:03:42 localhost openstack_network_exporter[241816]: ERROR 10:03:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:03:42 localhost openstack_network_exporter[241816]: ERROR 10:03:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:03:42 localhost openstack_network_exporter[241816]: ERROR 10:03:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 05:03:42 localhost openstack_network_exporter[241816]: ERROR 10:03:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 05:03:42 localhost openstack_network_exporter[241816]:
Dec 2 05:03:42 localhost openstack_network_exporter[241816]: ERROR 10:03:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 05:03:42 localhost openstack_network_exporter[241816]:
Dec 2 05:03:42 localhost sshd[308328]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 05:03:42 localhost systemd[1]: Created slice User Slice of UID 42436.
Dec 2 05:03:42 localhost systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec 2 05:03:42 localhost systemd-logind[760]: New session 72 of user nova.
Dec 2 05:03:42 localhost systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec 2 05:03:42 localhost systemd[1]: Starting User Manager for UID 42436...
Dec 2 05:03:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:03:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 2 05:03:42 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.?
172.18.0.108:0/91978558' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.469 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.472 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'pci_devices' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.492 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] End _get_guest_xml xml=
Dec 2 05:03:42 localhost nova_compute[281045]: [guest domain XML not recoverable: the XML markup was stripped when this log was captured, leaving only bare text nodes; surviving fragments, in order: 268e09a3-7abe-4037-a14a-068e7b8a78fb; instance-00000006; 131072; 1; tempest-UnshelveToHostMultiNodesTest-server-2084001492; 2025-12-02 10:03:41; 128; 1; 0; 0; 1; tempest-UnshelveToHostMultiNodesTest-557689334-project-member; tempest-UnshelveToHostMultiNodesTest-557689334; RDO; OpenStack Compute; 27.5.2-0.20250829104910.6f8decf.el9; 268e09a3-7abe-4037-a14a-068e7b8a78fb (twice); Virtual Machine; hvm; /dev/urandom]
Dec 2 05:03:42 localhost nova_compute[281045]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 2 05:03:42 localhost systemd[308350]: Queued start job for default target Main User Target.
Dec 2 05:03:42 localhost systemd[308350]: Created slice User Application Slice.
Dec 2 05:03:42 localhost systemd[308350]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 2 05:03:42 localhost systemd[308350]: Started Daily Cleanup of User's Temporary Directories.
Dec 2 05:03:42 localhost systemd[308350]: Reached target Paths.
Dec 2 05:03:42 localhost systemd[308350]: Reached target Timers.
Dec 2 05:03:42 localhost systemd[308350]: Starting D-Bus User Message Bus Socket...
Dec 2 05:03:42 localhost systemd[308350]: Starting Create User's Volatile Files and Directories...
Dec 2 05:03:42 localhost systemd[308350]: Listening on D-Bus User Message Bus Socket.
Dec 2 05:03:42 localhost systemd[308350]: Finished Create User's Volatile Files and Directories.
Dec 2 05:03:42 localhost systemd[308350]: Reached target Sockets.
Dec 2 05:03:42 localhost systemd[308350]: Reached target Basic System.
Dec 2 05:03:42 localhost systemd[308350]: Reached target Main User Target.
Dec 2 05:03:42 localhost systemd[308350]: Startup finished in 174ms.
Dec 2 05:03:42 localhost systemd[1]: Started User Manager for UID 42436.
Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.554 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.555 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.556 281049 INFO nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Using config drive
Dec 2 05:03:42 localhost systemd[1]: Started Session 72 of User nova.
Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.592 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.626 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.710 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'keypairs' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:03:42 localhost kernel: tun: Universal TUN/TAP device driver, 1.6 Dec 2 05:03:42 localhost kernel: device tap31de197b-ef entered promiscuous mode Dec 2 05:03:42 localhost NetworkManager[5967]: [1764669822.7789] manager: (tap31de197b-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/16) Dec 2 05:03:42 localhost systemd-udevd[308403]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.798 281049 INFO nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Creating config drive at /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config#033[00m Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.803 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxu8ptwds execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:03:42 localhost ovn_controller[153778]: 2025-12-02T10:03:42Z|00045|binding|INFO|Claiming lport 31de197b-ef56-4d2a-9fa2-293715a60004 for this additional chassis. Dec 2 05:03:42 localhost ovn_controller[153778]: 2025-12-02T10:03:42Z|00046|binding|INFO|31de197b-ef56-4d2a-9fa2-293715a60004: Claiming fa:16:3e:8f:bb:bd 10.100.0.4 Dec 2 05:03:42 localhost ovn_controller[153778]: 2025-12-02T10:03:42Z|00047|binding|INFO|Claiming lport 40590dd1-9250-4409-a2d0-cd4f4774bfc8 for this additional chassis. 
Dec 2 05:03:42 localhost ovn_controller[153778]: 2025-12-02T10:03:42Z|00048|binding|INFO|40590dd1-9250-4409-a2d0-cd4f4774bfc8: Claiming fa:16:3e:51:01:78 19.80.0.123 Dec 2 05:03:42 localhost NetworkManager[5967]: [1764669822.8423] device (tap31de197b-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 2 05:03:42 localhost NetworkManager[5967]: [1764669822.8430] device (tap31de197b-ef): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.849 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:42 localhost ovn_controller[153778]: 2025-12-02T10:03:42Z|00049|binding|INFO|Setting lport 31de197b-ef56-4d2a-9fa2-293715a60004 ovn-installed in OVS Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.855 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.861 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:42 localhost systemd-machined[202765]: New machine qemu-2-instance-00000007. Dec 2 05:03:42 localhost systemd[1]: Started Virtual Machine qemu-2-instance-00000007. 
Dec 2 05:03:42 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v120: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 4.3 MiB/s rd, 6.0 MiB/s wr, 178 op/s Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.959 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxu8ptwds" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:03:42 localhost nova_compute[281045]: 2025-12-02 10:03:42.995 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.000 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.169 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Started> emit_event 
/usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.171 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] VM Started (Lifecycle Event)#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.199 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.210 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.211 281049 INFO nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deleting local config drive /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config because it was imported into RBD.#033[00m Dec 2 05:03:43 localhost systemd-machined[202765]: New machine qemu-3-instance-00000006. Dec 2 05:03:43 localhost systemd[1]: Started Virtual Machine qemu-3-instance-00000006. 
Dec 2 05:03:43 localhost dnsmasq[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/addn_hosts - 0 addresses Dec 2 05:03:43 localhost dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/host Dec 2 05:03:43 localhost podman[308549]: 2025-12-02 10:03:43.496554525 +0000 UTC m=+0.054981468 container kill c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:03:43 localhost dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/opts Dec 2 05:03:43 localhost systemd[1]: tmp-crun.yJlvHA.mount: Deactivated successfully. 
Dec 2 05:03:43 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:43.574 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:43Z, description=, device_id=88a5a4f4-0c8e-40f7-81a0-9e11da229be3, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=093d20f1-e161-409e-b4af-0b70203841d0, ip_allocation=immediate, mac_address=fa:16:3e:83:bd:6f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=479, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:03:43Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.588 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:03:43 localhost 
nova_compute[281045]: 2025-12-02 10:03:43.588 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] VM Resumed (Lifecycle Event)#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.590 281049 DEBUG nova.compute.manager [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.590 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.593 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance spawned successfully.#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.622 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.624 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.649 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.649 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.650 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] VM Started (Lifecycle Event)#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.656 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:43 localhost kernel: device tap955afcdf-dd left promiscuous mode Dec 2 05:03:43 localhost ovn_controller[153778]: 2025-12-02T10:03:43Z|00050|binding|INFO|Releasing lport 955afcdf-dd99-4cb5-939f-5919590f8e3b from this chassis (sb_readonly=0) Dec 2 05:03:43 localhost ovn_controller[153778]: 2025-12-02T10:03:43Z|00051|binding|INFO|Setting lport 955afcdf-dd99-4cb5-939f-5919590f8e3b down in Southbound Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.667 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:03:43 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:43.667 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), 
table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-5376d097-2da8-4019-8e01-8b89ed4f41cf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5376d097-2da8-4019-8e01-8b89ed4f41cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'edfb5cc295894fc9a8dc307891edb831', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a37dcb8f-9361-4075-bf0e-f19264ce897a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=955afcdf-dd99-4cb5-939f-5919590f8e3b) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:03:43 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:43.670 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 955afcdf-dd99-4cb5-939f-5919590f8e3b in datapath 5376d097-2da8-4019-8e01-8b89ed4f41cf unbound from our chassis#033[00m Dec 2 05:03:43 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:43.673 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5376d097-2da8-4019-8e01-8b89ed4f41cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:03:43 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:43.674 262550 DEBUG 
oslo.privsep.daemon [-] privsep: reply[4836133d-c7ce-47e1-b8e6-82e3122f44fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.680 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.681 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.728 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Dec 2 05:03:43 localhost systemd[1]: tmp-crun.oSKOcD.mount: Deactivated successfully. 
Dec 2 05:03:43 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 7 addresses Dec 2 05:03:43 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:03:43 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:03:43 localhost podman[308612]: 2025-12-02 10:03:43.815113434 +0000 UTC m=+0.064361122 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.862 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.863 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] VM Resumed (Lifecycle Event)#033[00m Dec 2 05:03:43 localhost nova_compute[281045]: 2025-12-02 10:03:43.973 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:44 localhost nova_compute[281045]: 2025-12-02 10:03:44.014 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Checking state _get_power_state 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:03:44 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:44.094 262347 INFO neutron.agent.dhcp.agent [None req-b596b8e6-a457-42e1-a1fe-aed70f07f94a - - - - - -] DHCP configuration for ports {'093d20f1-e161-409e-b4af-0b70203841d0'} is completed#033[00m Dec 2 05:03:44 localhost systemd[1]: session-72.scope: Deactivated successfully. Dec 2 05:03:44 localhost systemd-logind[760]: Session 72 logged out. Waiting for processes to exit. Dec 2 05:03:44 localhost systemd-logind[760]: Removed session 72. Dec 2 05:03:44 localhost nova_compute[281045]: 2025-12-02 10:03:44.164 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:03:44 localhost nova_compute[281045]: 2025-12-02 10:03:44.184 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] During the sync_power process the instance has moved from host np0005541913.localdomain to host np0005541914.localdomain#033[00m Dec 2 05:03:44 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e101 e101: 6 total, 6 up, 6 in Dec 2 05:03:44 localhost ovn_controller[153778]: 2025-12-02T10:03:44Z|00052|binding|INFO|Claiming lport 31de197b-ef56-4d2a-9fa2-293715a60004 for this chassis. Dec 2 05:03:44 localhost ovn_controller[153778]: 2025-12-02T10:03:44Z|00053|binding|INFO|31de197b-ef56-4d2a-9fa2-293715a60004: Claiming fa:16:3e:8f:bb:bd 10.100.0.4 Dec 2 05:03:44 localhost ovn_controller[153778]: 2025-12-02T10:03:44Z|00054|binding|INFO|Claiming lport 40590dd1-9250-4409-a2d0-cd4f4774bfc8 for this chassis. 
Dec 2 05:03:44 localhost ovn_controller[153778]: 2025-12-02T10:03:44Z|00055|binding|INFO|40590dd1-9250-4409-a2d0-cd4f4774bfc8: Claiming fa:16:3e:51:01:78 19.80.0.123 Dec 2 05:03:44 localhost ovn_controller[153778]: 2025-12-02T10:03:44Z|00056|binding|INFO|Setting lport 31de197b-ef56-4d2a-9fa2-293715a60004 up in Southbound Dec 2 05:03:44 localhost ovn_controller[153778]: 2025-12-02T10:03:44Z|00057|binding|INFO|Setting lport 40590dd1-9250-4409-a2d0-cd4f4774bfc8 up in Southbound Dec 2 05:03:44 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:44.912 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:bb:bd 10.100.0.4'], port_security=['fa:16:3e:8f:bb:bd 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-17247491', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '63092ab0-9432-4c74-933e-e9d5428e6162', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-17247491', 'neutron:project_id': 'cccbafb2e3c343b2aab51714734bddce', 'neutron:revision_number': '9', 'neutron:security_group_ids': '5c93e274-85ac-42d3-b949-bdb62e6b8c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c5273a4-e474-4c2c-a95a-a522e1a174bd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=31de197b-ef56-4d2a-9fa2-293715a60004) old=Port_Binding(up=[False], additional_chassis=[], 
chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:44.914 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:01:78 19.80.0.123'], port_security=['fa:16:3e:51:01:78 19.80.0.123'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['31de197b-ef56-4d2a-9fa2-293715a60004'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1284966936', 'neutron:cidrs': '19.80.0.123/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3673812c-f461-4e86-831f-b7a7821f4bda', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1284966936', 'neutron:project_id': 'cccbafb2e3c343b2aab51714734bddce', 'neutron:revision_number': '3', 'neutron:security_group_ids': '5c93e274-85ac-42d3-b949-bdb62e6b8c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=23ebc33b-05e4-4907-9bc1-7e563b7692f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=40590dd1-9250-4409-a2d0-cd4f4774bfc8) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:44.915 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 31de197b-ef56-4d2a-9fa2-293715a60004 in datapath 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f bound to our chassis#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:44.916 159483 INFO 
neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f#033[00m Dec 2 05:03:44 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v122: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 5.2 MiB/s rd, 7.2 MiB/s wr, 203 op/s Dec 2 05:03:44 localhost neutron_sriov_agent[255428]: 2025-12-02 10:03:44.954 2 WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [req-a1f7258b-8365-4eb2-997c-eb7bece0a428 req-b7b5c7b5-3f24-45f3-b756-f295fdd89115 4ea94a3d730c499a8a661131692645ce 497073c2347a4b2dbbf501873318fbd3 - - default default] This port is not SRIOV, skip binding for port 31de197b-ef56-4d2a-9fa2-293715a60004.#033[00m Dec 2 05:03:45 localhost nova_compute[281045]: 2025-12-02 10:03:45.056 281049 INFO nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Post operation of migration started#033[00m Dec 2 05:03:45 localhost nova_compute[281045]: 2025-12-02 10:03:45.203 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquiring lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:03:45 localhost nova_compute[281045]: 2025-12-02 10:03:45.203 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquired lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:03:45 localhost nova_compute[281045]: 2025-12-02 10:03:45.203 281049 DEBUG nova.network.neutron [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 
0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 2 05:03:45 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:45.432 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[53cb8ace-c4fc-4e76-b403-3bba4dee83e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:45 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:45.433 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62df5f27-c1 in ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 2 05:03:45 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:45.436 262550 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62df5f27-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 2 05:03:45 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:45.436 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[6b643cb7-7925-4781-a209-8f0229697d93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:45 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:45.438 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d6498375-9df2-4119-a29a-11d28b221e8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:45 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:45.465 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[bb913cc3-414d-418c-9240-b8b1cf44c97f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:45 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:45.481 262550 DEBUG 
oslo.privsep.daemon [-] privsep: reply[fcf7e578-a6c8-423d-b315-bf1187aafe5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:03:45 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:45.486 159483 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpxpfixk9t/privsep.sock']#033[00m Dec 2 05:03:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:03:45 localhost podman[308640]: 2025-12-02 10:03:45.581281633 +0000 UTC m=+0.082417729 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container) Dec 2 05:03:45 localhost podman[308640]: 2025-12-02 10:03:45.592919436 +0000 UTC m=+0.094055532 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 2 05:03:45 localhost nova_compute[281045]: 2025-12-02 10:03:45.592 281049 DEBUG nova.compute.manager [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:03:45 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:03:45 localhost nova_compute[281045]: 2025-12-02 10:03:45.609 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:45 localhost systemd[1]: tmp-crun.dsaF3o.mount: Deactivated successfully. Dec 2 05:03:45 localhost podman[308639]: 2025-12-02 10:03:45.646323935 +0000 UTC m=+0.150351789 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:03:45 localhost podman[308639]: 2025-12-02 10:03:45.666858778 +0000 UTC m=+0.170886582 container exec_died 
3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:03:45 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:03:45 localhost nova_compute[281045]: 2025-12-02 10:03:45.680 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" "released" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: held 8.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:45 localhost nova_compute[281045]: 2025-12-02 10:03:45.846 281049 DEBUG nova.network.neutron [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Updating instance_info_cache with network_info: [{"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:03:45 localhost nova_compute[281045]: 2025-12-02 10:03:45.869 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Releasing lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:03:45 localhost nova_compute[281045]: 2025-12-02 10:03:45.888 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:45 localhost nova_compute[281045]: 2025-12-02 10:03:45.890 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:45 localhost nova_compute[281045]: 2025-12-02 10:03:45.890 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:45 localhost nova_compute[281045]: 2025-12-02 10:03:45.896 281049 INFO nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 
0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m Dec 2 05:03:45 localhost journal[228953]: Domain id=2 name='instance-00000007' uuid=63092ab0-9432-4c74-933e-e9d5428e6162 is tainted: custom-monitor Dec 2 05:03:45 localhost nova_compute[281045]: 2025-12-02 10:03:45.936 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:46.216 159483 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Dec 2 05:03:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:46.218 159483 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpxpfixk9t/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Dec 2 05:03:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:46.085 308685 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 2 05:03:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:46.090 308685 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 2 05:03:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:46.094 308685 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Dec 2 05:03:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:46.094 308685 INFO oslo.privsep.daemon [-] privsep daemon running as pid 308685#033[00m Dec 2 05:03:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:46.221 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ac4a98-fc0a-4833-b943-87502f26bebd]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:46 localhost nova_compute[281045]: 2025-12-02 
10:03:46.332 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:46 localhost nova_compute[281045]: 2025-12-02 10:03:46.507 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:46 localhost nova_compute[281045]: 2025-12-02 10:03:46.508 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" acquired by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:46 localhost nova_compute[281045]: 2025-12-02 10:03:46.508 281049 INFO nova.compute.manager [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Shelving#033[00m Dec 2 05:03:46 localhost nova_compute[281045]: 2025-12-02 10:03:46.535 281049 DEBUG nova.virt.libvirt.driver [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m Dec 2 05:03:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:46.699 308685 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by 
"neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:46.699 308685 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:46.699 308685 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:46 localhost nova_compute[281045]: 2025-12-02 10:03:46.905 281049 INFO nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Sending announce-self command to QEMU monitor. 
Attempt 2 of 3#033[00m Dec 2 05:03:46 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v123: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 5.2 MiB/s rd, 7.2 MiB/s wr, 203 op/s Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.205 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[3e558ab3-0241-4907-90db-57fc04284a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.226 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[54bbfa4e-f5ad-469d-9d22-e510872699a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:47 localhost NetworkManager[5967]: [1764669827.2279] manager: (tap62df5f27-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/17) Dec 2 05:03:47 localhost systemd-udevd[308695]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.265 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[8373eb8c-b678-41e0-9b5c-f01fb67db209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.269 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[f622a358-6fcc-491b-8432-94725d07c0cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:47 localhost NetworkManager[5967]: [1764669827.2923] device (tap62df5f27-c0): carrier: link connected Dec 2 05:03:47 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap62df5f27-c1: link becomes ready Dec 2 05:03:47 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap62df5f27-c0: link becomes ready Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.300 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[1995fddd-0162-4bcf-ae7f-712cf6327251]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.325 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[afd2fbe2-e716-4e0c-b461-2240adaf445d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62df5f27-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 
'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:73:df:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1199145, 'reachable_time': 
40551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308715, 'error': None, 'target': 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.347 262550 DEBUG oslo.privsep.daemon [-] 
privsep: reply[6ac4a4d5-7efe-49f1-b050-8d5e8457204b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:df9c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1199145, 'tstamp': 1199145}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308716, 'error': None, 'target': 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.369 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4055d988-0cba-44a2-8af9-dc566865bf18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62df5f27-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:73:df:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 
'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1199145, 'reachable_time': 40551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 
'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308717, 'error': None, 'target': 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.403 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[991849b5-b088-472a-892c-6daf5bf1513e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.464 262550 DEBUG oslo.privsep.daemon [-] privsep: 
reply[47e410bf-f0f5-4349-9bc9-073a2796ff05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.467 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62df5f27-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.467 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.468 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62df5f27-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:47 localhost nova_compute[281045]: 2025-12-02 10:03:47.523 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:47 localhost kernel: device tap62df5f27-c0 entered promiscuous mode Dec 2 05:03:47 localhost nova_compute[281045]: 2025-12-02 10:03:47.527 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.528 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62df5f27-c0, col_values=(('external_ids', {'iface-id': 'ea045be8-e121-4ff5-bb82-2a757b7ce736'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:47 localhost 
nova_compute[281045]: 2025-12-02 10:03:47.529 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:47 localhost ovn_controller[153778]: 2025-12-02T10:03:47Z|00058|binding|INFO|Releasing lport ea045be8-e121-4ff5-bb82-2a757b7ce736 from this chassis (sb_readonly=0) Dec 2 05:03:47 localhost nova_compute[281045]: 2025-12-02 10:03:47.537 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:47 localhost nova_compute[281045]: 2025-12-02 10:03:47.540 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.542 159483 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.543 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[12e9c2dc-58e7-4633-a92b-7798d1659aa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.545 159483 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: global Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: log /dev/log local0 debug Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: log-tag haproxy-metadata-proxy-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: user root Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: group 
root Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: maxconn 1024 Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: pidfile /var/lib/neutron/external/pids/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f.pid.haproxy Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: daemon Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: defaults Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: log global Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: mode http Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: option httplog Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: option dontlognull Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: option http-server-close Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: option forwardfor Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: retries 3 Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: timeout http-request 30s Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: timeout connect 30s Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: timeout client 32s Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: timeout server 32s Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: timeout http-keep-alive 30s Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: listen listener Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: bind 169.254.169.254:80 Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: server metadata /var/lib/neutron/metadata_proxy Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: http-request add-header X-OVN-Network-ID 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 2 05:03:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:47.546 159483 DEBUG neutron.agent.linux.utils [-] 
Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'env', 'PROCESS_TAG=haproxy-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 2 05:03:47 localhost nova_compute[281045]: 2025-12-02 10:03:47.912 281049 INFO nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m Dec 2 05:03:47 localhost nova_compute[281045]: 2025-12-02 10:03:47.917 281049 DEBUG nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:03:47 localhost nova_compute[281045]: 2025-12-02 10:03:47.936 281049 DEBUG nova.objects.instance [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m Dec 2 05:03:47 localhost podman[308750]: Dec 2 05:03:47 localhost podman[308750]: 2025-12-02 10:03:47.977937089 +0000 UTC m=+0.086968477 container create aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:03:48 localhost podman[308750]: 2025-12-02 10:03:47.92552956 +0000 UTC m=+0.034560908 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 05:03:48 localhost systemd[1]: Started libpod-conmon-aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485.scope. Dec 2 05:03:48 localhost systemd[1]: Started libcrun container. Dec 2 05:03:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ae3f6c85df8190497f8b25c7653b5a855ddd3a7711f745ce3c2dd3a00d6dcd7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:03:48 localhost podman[308750]: 2025-12-02 10:03:48.084340206 +0000 UTC m=+0.193371564 container init aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 2 05:03:48 localhost podman[308750]: 2025-12-02 10:03:48.092163152 +0000 UTC m=+0.201194500 container start aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:03:48 localhost neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[308764]: [NOTICE] (308768) : New worker (308770) forked Dec 2 05:03:48 localhost neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[308764]: [NOTICE] (308768) : Loading success. Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.147 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 40590dd1-9250-4409-a2d0-cd4f4774bfc8 in datapath 3673812c-f461-4e86-831f-b7a7821f4bda unbound from our chassis#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.152 159483 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3673812c-f461-4e86-831f-b7a7821f4bda#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.163 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[80c012f6-e04d-46c9-8da3-ccfb64dcab5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.164 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3673812c-f1 in ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.167 262550 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3673812c-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.168 262550 DEBUG oslo.privsep.daemon 
[-] privsep: reply[04a72069-ce94-4d4d-8f0d-0d83c4a6acd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.169 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1648ef10-0358-4276-abd1-456e87b66116]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.189 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[3786c921-124b-4e03-b222-aab41f658bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.203 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[aa39951c-2043-45f9-b832-d0b348255b06]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.230 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[cd276ffc-37df-4d68-a5ac-86bc046736d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost systemd-udevd[308709]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:03:48 localhost NetworkManager[5967]: [1764669828.2418] manager: (tap3673812c-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/18) Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.243 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[683abe99-1a95-4a53-91d5-f19712101fbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.274 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c27e57-45e7-43c4-927f-043f720fda61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.280 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[14a225f8-08f9-4183-a5df-984138ca48c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap3673812c-f1: link becomes ready Dec 2 05:03:48 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap3673812c-f0: link becomes ready Dec 2 05:03:48 localhost NetworkManager[5967]: [1764669828.3003] device (tap3673812c-f0): carrier: link connected Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.303 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[955941e4-8cc7-4f18-819c-a26834d98366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.318 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e03b6054-4904-4a22-bf06-2cc827321138]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3673812c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], 
['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e1:13:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 
'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1199246, 'reachable_time': 37960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], 
['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308790, 'error': None, 'target': 'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.332 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[3eca2dfc-02fb-4318-8eca-1c534ca076c3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:13c5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1199246, 'tstamp': 1199246}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308791, 'error': None, 'target': 'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.350 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1f86e8fe-136d-45cd-87ea-a819a8d4a148]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3673812c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 
'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e1:13:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1199246, 'reachable_time': 37960, 
'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308792, 'error': None, 'target': 'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.380 262550 DEBUG oslo.privsep.daemon [-] 
privsep: reply[edc2c947-879f-4652-b0ea-9b06eb79eabb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.433 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4502348a-f214-4042-9006-46302d753500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.435 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3673812c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.436 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.437 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3673812c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:48 localhost kernel: device tap3673812c-f0 entered promiscuous mode Dec 2 05:03:48 localhost nova_compute[281045]: 2025-12-02 10:03:48.440 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:48 localhost nova_compute[281045]: 2025-12-02 10:03:48.443 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.445 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): 
DbSetCommand(_result=None, table=Interface, record=tap3673812c-f0, col_values=(('external_ids', {'iface-id': 'ba8757f7-1076-4bc0-8968-1084ffa48766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:48 localhost nova_compute[281045]: 2025-12-02 10:03:48.447 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:48 localhost ovn_controller[153778]: 2025-12-02T10:03:48Z|00059|binding|INFO|Releasing lport ba8757f7-1076-4bc0-8968-1084ffa48766 from this chassis (sb_readonly=0) Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.450 159483 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3673812c-f461-4e86-831f-b7a7821f4bda.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3673812c-f461-4e86-831f-b7a7821f4bda.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.452 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d0032f69-4d11-4e7c-ab11-0808b4f0a79b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.453 159483 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: global Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: log /dev/log local0 debug Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: log-tag haproxy-metadata-proxy-3673812c-f461-4e86-831f-b7a7821f4bda Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: user root Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: group root Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: maxconn 1024 Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: pidfile 
/var/lib/neutron/external/pids/3673812c-f461-4e86-831f-b7a7821f4bda.pid.haproxy Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: daemon Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: defaults Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: log global Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: mode http Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: option httplog Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: option dontlognull Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: option http-server-close Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: option forwardfor Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: retries 3 Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: timeout http-request 30s Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: timeout connect 30s Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: timeout client 32s Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: timeout server 32s Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: timeout http-keep-alive 30s Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: listen listener Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: bind 169.254.169.254:80 Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: server metadata /var/lib/neutron/metadata_proxy Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: http-request add-header X-OVN-Network-ID 3673812c-f461-4e86-831f-b7a7821f4bda Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 2 05:03:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:48.454 159483 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda', 'env', 'PROCESS_TAG=haproxy-3673812c-f461-4e86-831f-b7a7821f4bda', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3673812c-f461-4e86-831f-b7a7821f4bda.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 2 05:03:48 localhost nova_compute[281045]: 2025-12-02 10:03:48.457 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:48 localhost snmpd[69217]: empty variable list in _query Dec 2 05:03:48 localhost snmpd[69217]: empty variable list in _query Dec 2 05:03:48 localhost ovn_controller[153778]: 2025-12-02T10:03:48Z|00060|binding|INFO|Releasing lport ea045be8-e121-4ff5-bb82-2a757b7ce736 from this chassis (sb_readonly=0) Dec 2 05:03:48 localhost ovn_controller[153778]: 2025-12-02T10:03:48Z|00061|binding|INFO|Releasing lport ba8757f7-1076-4bc0-8968-1084ffa48766 from this chassis (sb_readonly=0) Dec 2 05:03:48 localhost podman[308834]: 2025-12-02 10:03:48.866974006 +0000 UTC m=+0.096186137 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:03:48 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses Dec 2 05:03:48 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:03:48 localhost dnsmasq-dhcp[262677]: read 
/var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:03:48 localhost nova_compute[281045]: 2025-12-02 10:03:48.912 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:48 localhost podman[308849]: Dec 2 05:03:48 localhost podman[308849]: 2025-12-02 10:03:48.947426466 +0000 UTC m=+0.105723907 container create 45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:03:48 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v124: 177 pgs: 177 active+clean; 304 MiB data, 1016 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 214 op/s Dec 2 05:03:48 localhost nova_compute[281045]: 2025-12-02 10:03:48.974 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:48 localhost systemd[1]: Started libpod-conmon-45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6.scope. Dec 2 05:03:49 localhost systemd[1]: Started libcrun container. 
Dec 2 05:03:49 localhost podman[308849]: 2025-12-02 10:03:48.895032707 +0000 UTC m=+0.053330158 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 05:03:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2dd055fe24137a82f7d8c9b97ce73c529332056d65bf867e145475b505e1900/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:03:49 localhost podman[308849]: 2025-12-02 10:03:49.027554644 +0000 UTC m=+0.185852085 container init 45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:03:49 localhost podman[308849]: 2025-12-02 10:03:49.039285591 +0000 UTC m=+0.197583032 container start 45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 2 05:03:49 localhost neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[308873]: [NOTICE] (308877) : New worker (308879) forked Dec 2 05:03:49 localhost 
neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[308873]: [NOTICE] (308877) : Loading success. Dec 2 05:03:49 localhost nova_compute[281045]: 2025-12-02 10:03:49.201 281049 DEBUG nova.compute.manager [req-f7f1eb25-16d3-40ff-88e0-d3be382ffcd4 req-89c58e69-ed85-4d9e-8c6a-b631e6caa9b2 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:03:49 localhost nova_compute[281045]: 2025-12-02 10:03:49.202 281049 DEBUG oslo_concurrency.lockutils [req-f7f1eb25-16d3-40ff-88e0-d3be382ffcd4 req-89c58e69-ed85-4d9e-8c6a-b631e6caa9b2 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:49 localhost nova_compute[281045]: 2025-12-02 10:03:49.202 281049 DEBUG oslo_concurrency.lockutils [req-f7f1eb25-16d3-40ff-88e0-d3be382ffcd4 req-89c58e69-ed85-4d9e-8c6a-b631e6caa9b2 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:49 localhost nova_compute[281045]: 2025-12-02 10:03:49.202 281049 DEBUG oslo_concurrency.lockutils [req-f7f1eb25-16d3-40ff-88e0-d3be382ffcd4 req-89c58e69-ed85-4d9e-8c6a-b631e6caa9b2 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by 
"nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:49 localhost nova_compute[281045]: 2025-12-02 10:03:49.202 281049 DEBUG nova.compute.manager [req-f7f1eb25-16d3-40ff-88e0-d3be382ffcd4 req-89c58e69-ed85-4d9e-8c6a-b631e6caa9b2 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] No waiting events found dispatching network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:03:49 localhost nova_compute[281045]: 2025-12-02 10:03:49.203 281049 WARNING nova.compute.manager [req-f7f1eb25-16d3-40ff-88e0-d3be382ffcd4 req-89c58e69-ed85-4d9e-8c6a-b631e6caa9b2 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received unexpected event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 for instance with vm_state active and task_state None.#033[00m Dec 2 05:03:49 localhost dnsmasq[307582]: exiting on receipt of SIGTERM Dec 2 05:03:49 localhost podman[308905]: 2025-12-02 10:03:49.664639321 +0000 UTC m=+0.069934921 container kill c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:03:49 localhost systemd[1]: libpod-c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367.scope: 
Deactivated successfully. Dec 2 05:03:49 localhost podman[308919]: 2025-12-02 10:03:49.742362547 +0000 UTC m=+0.060691161 container died c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 2 05:03:49 localhost podman[308919]: 2025-12-02 10:03:49.772827321 +0000 UTC m=+0.091155875 container cleanup c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:03:49 localhost systemd[1]: libpod-conmon-c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367.scope: Deactivated successfully. 
Dec 2 05:03:49 localhost podman[308920]: 2025-12-02 10:03:49.844479544 +0000 UTC m=+0.159081605 container remove c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:03:49 localhost systemd[1]: tmp-crun.27am72.mount: Deactivated successfully. Dec 2 05:03:49 localhost systemd[1]: var-lib-containers-storage-overlay-fb1e695aeb824d8fe3d96595a317c2bff704005f07a946fa3818e91a4a7fa6e0-merged.mount: Deactivated successfully. Dec 2 05:03:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367-userdata-shm.mount: Deactivated successfully. Dec 2 05:03:50 localhost systemd[1]: run-netns-qdhcp\x2d5376d097\x2d2da8\x2d4019\x2d8e01\x2d8b89ed4f41cf.mount: Deactivated successfully. 
Dec 2 05:03:50 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:50.121 262347 INFO neutron.agent.dhcp.agent [None req-ba36a1e8-81e8-4404-a121-231e9de97bd4 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.166 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.167 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.167 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.168 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by 
"nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.169 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.171 281049 INFO nova.compute.manager [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Terminating instance#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.172 281049 DEBUG nova.compute.manager [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Start destroying the instance on the hypervisor. 
_shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Dec 2 05:03:50 localhost kernel: device tap31de197b-ef left promiscuous mode Dec 2 05:03:50 localhost NetworkManager[5967]: [1764669830.2192] device (tap31de197b-ef): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.221 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:50 localhost ovn_controller[153778]: 2025-12-02T10:03:50Z|00062|binding|INFO|Releasing lport 31de197b-ef56-4d2a-9fa2-293715a60004 from this chassis (sb_readonly=0) Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.227 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:50 localhost ovn_controller[153778]: 2025-12-02T10:03:50Z|00063|binding|INFO|Setting lport 31de197b-ef56-4d2a-9fa2-293715a60004 down in Southbound Dec 2 05:03:50 localhost ovn_controller[153778]: 2025-12-02T10:03:50Z|00064|binding|INFO|Releasing lport 40590dd1-9250-4409-a2d0-cd4f4774bfc8 from this chassis (sb_readonly=0) Dec 2 05:03:50 localhost ovn_controller[153778]: 2025-12-02T10:03:50Z|00065|binding|INFO|Setting lport 40590dd1-9250-4409-a2d0-cd4f4774bfc8 down in Southbound Dec 2 05:03:50 localhost ovn_controller[153778]: 2025-12-02T10:03:50Z|00066|binding|INFO|Removing iface tap31de197b-ef ovn-installed in OVS Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.232 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:50 localhost ovn_controller[153778]: 2025-12-02T10:03:50Z|00067|binding|INFO|Releasing lport ea045be8-e121-4ff5-bb82-2a757b7ce736 from this chassis (sb_readonly=0) Dec 2 05:03:50 localhost 
ovn_controller[153778]: 2025-12-02T10:03:50Z|00068|binding|INFO|Releasing lport ba8757f7-1076-4bc0-8968-1084ffa48766 from this chassis (sb_readonly=0) Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.243 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:bb:bd 10.100.0.4'], port_security=['fa:16:3e:8f:bb:bd 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-17247491', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '63092ab0-9432-4c74-933e-e9d5428e6162', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-17247491', 'neutron:project_id': 'cccbafb2e3c343b2aab51714734bddce', 'neutron:revision_number': '12', 'neutron:security_group_ids': '5c93e274-85ac-42d3-b949-bdb62e6b8c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c5273a4-e474-4c2c-a95a-a522e1a174bd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=31de197b-ef56-4d2a-9fa2-293715a60004) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.247 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), 
priority=20 to row=Port_Binding(mac=['fa:16:3e:51:01:78 19.80.0.123'], port_security=['fa:16:3e:51:01:78 19.80.0.123'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['31de197b-ef56-4d2a-9fa2-293715a60004'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1284966936', 'neutron:cidrs': '19.80.0.123/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3673812c-f461-4e86-831f-b7a7821f4bda', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1284966936', 'neutron:project_id': 'cccbafb2e3c343b2aab51714734bddce', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5c93e274-85ac-42d3-b949-bdb62e6b8c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=23ebc33b-05e4-4907-9bc1-7e563b7692f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=40590dd1-9250-4409-a2d0-cd4f4774bfc8) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.249 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 31de197b-ef56-4d2a-9fa2-293715a60004 in datapath 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f unbound from our chassis#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.257 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.259 262550 DEBUG oslo.privsep.daemon 
[-] privsep: reply[2b63b0ef-0e75-40d7-8531-748adb2bc0c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.260 159483 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f namespace which is not needed anymore#033[00m Dec 2 05:03:50 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000007.scope: Deactivated successfully. Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.267 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:50 localhost systemd-machined[202765]: Machine qemu-2-instance-00000007 terminated. Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.300 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:50 localhost neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[308764]: [NOTICE] (308768) : haproxy version is 2.8.14-c23fe91 Dec 2 05:03:50 localhost neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[308764]: [NOTICE] (308768) : path to executable is /usr/sbin/haproxy Dec 2 05:03:50 localhost neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[308764]: [ALERT] (308768) : Current worker (308770) exited with code 143 (Terminated) Dec 2 05:03:50 localhost neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[308764]: [WARNING] (308768) : All workers exited. Exiting... (0) Dec 2 05:03:50 localhost systemd[1]: libpod-aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485.scope: Deactivated successfully. 
Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.412 281049 INFO nova.virt.libvirt.driver [-] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Instance destroyed successfully.#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.413 281049 DEBUG nova.objects.instance [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lazy-loading 'resources' on Instance uuid 63092ab0-9432-4c74-933e-e9d5428e6162 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:03:50 localhost podman[308969]: 2025-12-02 10:03:50.417989333 +0000 UTC m=+0.057970479 container died aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.432 281049 DEBUG nova.virt.libvirt.vif [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-02T10:03:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-861747463',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-861747463',id=7,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-02T10:03:21Z,launched_on='np0005541913.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='cccbafb2e3c343b2aab51714734bddce',ramdisk_id='',reservation_id='r-sf2jj0i0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-5814605',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-5814605-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2025-12-02T10:03:47Z,user_data=None,user_id='60f523e6d0374
3daa3ff6f5bc7122d00',uuid=63092ab0-9432-4c74-933e-e9d5428e6162,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.432 281049 DEBUG nova.network.os_vif_util [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Converting VIF {"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.433 281049 DEBUG nova.network.os_vif_util [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.433 281049 DEBUG os_vif [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef') unplug 
/usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.437 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.437 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31de197b-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.439 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.441 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.443 281049 INFO os_vif [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef')#033[00m Dec 2 05:03:50 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:50.454 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:03:50 localhost podman[308969]: 2025-12-02 10:03:50.45585903 +0000 UTC m=+0.095840116 container cleanup aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Dec 2 05:03:50 localhost podman[308993]: 2025-12-02 10:03:50.486945593 +0000 UTC m=+0.061724813 container cleanup aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:03:50 localhost systemd[1]: libpod-conmon-aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485.scope: Deactivated successfully. 
Dec 2 05:03:50 localhost podman[309021]: 2025-12-02 10:03:50.536901207 +0000 UTC m=+0.064377342 container remove aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.542 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[efa58c88-7def-41e3-baa6-20d6999b30f4]: (4, ('Tue Dec 2 10:03:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f (aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485)\naab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485\nTue Dec 2 10:03:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f (aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485)\naab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.544 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[172f4331-dd9f-4f9a-b0c8-cd3e226cbe09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.545 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62df5f27-c0, bridge=None, if_exists=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.548 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:50 localhost kernel: device tap62df5f27-c0 left promiscuous mode Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.554 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.559 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[024ad253-86d0-45cf-a175-5b3cd975931a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.576 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e4190c2d-037a-4550-9bdc-b431a3ec302a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.578 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[85116c96-893e-4541-9389-59648ae05b6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.591 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[29d969e2-9d75-482c-94ed-10a224ddf94d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 
65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 
1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1199136, 'reachable_time': 42765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309039, 'error': None, 'target': 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.600 159602 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.600 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[822ba53e-a82c-4cfa-bedc-b7323ce74e39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.602 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 40590dd1-9250-4409-a2d0-cd4f4774bfc8 in datapath 3673812c-f461-4e86-831f-b7a7821f4bda unbound from our chassis#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.608 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3673812c-f461-4e86-831f-b7a7821f4bda, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.610 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[ed53ef49-54b9-4ec5-b669-daf69239b792]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.612 159483 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda namespace which is not needed anymore#033[00m Dec 2 05:03:50 localhost neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[308873]: [NOTICE] (308877) : haproxy version is 2.8.14-c23fe91 Dec 2 05:03:50 localhost neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[308873]: [NOTICE] (308877) : path to executable is 
/usr/sbin/haproxy Dec 2 05:03:50 localhost neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[308873]: [WARNING] (308877) : Exiting Master process... Dec 2 05:03:50 localhost neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[308873]: [ALERT] (308877) : Current worker (308879) exited with code 143 (Terminated) Dec 2 05:03:50 localhost neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[308873]: [WARNING] (308877) : All workers exited. Exiting... (0) Dec 2 05:03:50 localhost systemd[1]: libpod-45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6.scope: Deactivated successfully. Dec 2 05:03:50 localhost podman[309058]: 2025-12-02 10:03:50.757491366 +0000 UTC m=+0.059787223 container died 45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:03:50 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e102 e102: 6 total, 6 up, 6 in Dec 2 05:03:50 localhost podman[309058]: 2025-12-02 10:03:50.791219999 +0000 UTC m=+0.093515796 container cleanup 45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:03:50 localhost podman[309072]: 2025-12-02 10:03:50.821836287 +0000 UTC m=+0.056546445 container cleanup 45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:03:50 localhost systemd[1]: libpod-conmon-45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6.scope: Deactivated successfully. Dec 2 05:03:50 localhost podman[309085]: 2025-12-02 10:03:50.878486334 +0000 UTC m=+0.069090286 container remove 45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.881 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[5f85fcc5-5599-4e7f-91f9-ac195309a115]: (4, ('Tue Dec 2 10:03:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda 
(45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6)\n45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6\nTue Dec 2 10:03:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda (45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6)\n45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.883 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[a94c5baa-16f3-448b-a29b-ccaf4ff9aebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.884 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3673812c-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.886 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:50 localhost kernel: device tap3673812c-f0 left promiscuous mode Dec 2 05:03:50 localhost nova_compute[281045]: 2025-12-02 10:03:50.895 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.898 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[106c8d8a-6795-434b-b747-127f5a4fa473]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.912 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[20650ce1-9078-4395-ad67-14f6ff78185c]: (4, None) 
_call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.913 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1612d7-cf00-4479-a11a-8e736571b0f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.927 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d717fc77-74ca-4fb0-b35c-1c26a417baf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 
'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1199239, 'reachable_time': 22157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 
'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309108, 'error': None, 'target': 'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.929 159602 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 2 05:03:50 localhost ovn_metadata_agent[159477]: 2025-12-02 10:03:50.929 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[4880a318-4272-4105-a59d-9abc8f7b6be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:50 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v126: 177 pgs: 177 active+clean; 304 MiB data, 1016 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 20 KiB/s wr, 152 op/s Dec 2 05:03:50 localhost systemd[1]: var-lib-containers-storage-overlay-c2dd055fe24137a82f7d8c9b97ce73c529332056d65bf867e145475b505e1900-merged.mount: Deactivated successfully. 
Dec 2 05:03:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6-userdata-shm.mount: Deactivated successfully. Dec 2 05:03:50 localhost systemd[1]: run-netns-ovnmeta\x2d3673812c\x2df461\x2d4e86\x2d831f\x2db7a7821f4bda.mount: Deactivated successfully. Dec 2 05:03:50 localhost systemd[1]: var-lib-containers-storage-overlay-6ae3f6c85df8190497f8b25c7653b5a855ddd3a7711f745ce3c2dd3a00d6dcd7-merged.mount: Deactivated successfully. Dec 2 05:03:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485-userdata-shm.mount: Deactivated successfully. Dec 2 05:03:50 localhost systemd[1]: run-netns-ovnmeta\x2d62df5f27\x2dc8d9\x2d4d79\x2d9ad6\x2d2f32e63bf47f.mount: Deactivated successfully. Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.037 281049 INFO nova.virt.libvirt.driver [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Deleting instance files /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162_del#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.037 281049 INFO nova.virt.libvirt.driver [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Deletion of /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162_del complete#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.103 281049 INFO nova.compute.manager [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m Dec 2 05:03:51 
localhost nova_compute[281045]: 2025-12-02 10:03:51.103 281049 DEBUG oslo.service.loopingcall [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.103 281049 DEBUG nova.compute.manager [-] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.104 281049 DEBUG nova.network.neutron [-] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Dec 2 05:03:51 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:03:51.172 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.268 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.269 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock 
"63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.270 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.270 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.270 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] No waiting events found dispatching network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.271 281049 WARNING nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default 
default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received unexpected event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 for instance with vm_state active and task_state deleting.#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.271 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.272 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.272 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.273 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock 
"63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.273 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] No waiting events found dispatching network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.274 281049 WARNING nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received unexpected event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 for instance with vm_state active and task_state deleting.#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.274 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-unplugged-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.275 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock 
"63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.275 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.275 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.276 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] No waiting events found dispatching network-vif-unplugged-31de197b-ef56-4d2a-9fa2-293715a60004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.276 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default 
default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-unplugged-31de197b-ef56-4d2a-9fa2-293715a60004 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.277 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.277 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.278 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.278 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 
497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.278 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] No waiting events found dispatching network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.279 281049 WARNING nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received unexpected event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 for instance with vm_state active and task_state deleting.#033[00m Dec 2 05:03:51 localhost nova_compute[281045]: 2025-12-02 10:03:51.382 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:03:52 localhost systemd[1]: tmp-crun.RfhleY.mount: Deactivated successfully. 
Dec 2 05:03:52 localhost podman[309109]: 2025-12-02 10:03:52.076293085 +0000 UTC m=+0.085585054 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0) Dec 2 05:03:52 localhost podman[309109]: 2025-12-02 10:03:52.179533769 +0000 UTC m=+0.188825718 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:03:52 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:03:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:52 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v127: 177 pgs: 177 active+clean; 225 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 192 op/s Dec 2 05:03:53 localhost nova_compute[281045]: 2025-12-02 10:03:53.552 281049 DEBUG nova.network.neutron [-] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:03:53 localhost nova_compute[281045]: 2025-12-02 10:03:53.571 281049 INFO nova.compute.manager [-] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Took 2.47 seconds to deallocate network for instance.#033[00m Dec 2 05:03:53 localhost nova_compute[281045]: 2025-12-02 10:03:53.625 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:53 localhost nova_compute[281045]: 2025-12-02 10:03:53.625 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:53 localhost nova_compute[281045]: 2025-12-02 10:03:53.627 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 
cccbafb2e3c343b2aab51714734bddce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:53 localhost nova_compute[281045]: 2025-12-02 10:03:53.672 281049 INFO nova.scheduler.client.report [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Deleted allocations for instance 63092ab0-9432-4c74-933e-e9d5428e6162#033[00m Dec 2 05:03:53 localhost nova_compute[281045]: 2025-12-02 10:03:53.763 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 3.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:54 localhost systemd[1]: Stopping User Manager for UID 42436... Dec 2 05:03:54 localhost systemd[308350]: Activating special unit Exit the Session... Dec 2 05:03:54 localhost systemd[308350]: Stopped target Main User Target. Dec 2 05:03:54 localhost systemd[308350]: Stopped target Basic System. Dec 2 05:03:54 localhost systemd[308350]: Stopped target Paths. Dec 2 05:03:54 localhost systemd[308350]: Stopped target Sockets. Dec 2 05:03:54 localhost systemd[308350]: Stopped target Timers. Dec 2 05:03:54 localhost systemd[308350]: Stopped Mark boot as successful after the user session has run 2 minutes. Dec 2 05:03:54 localhost systemd[308350]: Stopped Daily Cleanup of User's Temporary Directories. Dec 2 05:03:54 localhost systemd[308350]: Closed D-Bus User Message Bus Socket. Dec 2 05:03:54 localhost systemd[308350]: Stopped Create User's Volatile Files and Directories. 
Dec 2 05:03:54 localhost systemd[308350]: Removed slice User Application Slice. Dec 2 05:03:54 localhost systemd[308350]: Reached target Shutdown. Dec 2 05:03:54 localhost systemd[308350]: Finished Exit the Session. Dec 2 05:03:54 localhost systemd[308350]: Reached target Exit the Session. Dec 2 05:03:54 localhost systemd[1]: user@42436.service: Deactivated successfully. Dec 2 05:03:54 localhost systemd[1]: Stopped User Manager for UID 42436. Dec 2 05:03:54 localhost systemd[1]: Stopping User Runtime Directory /run/user/42436... Dec 2 05:03:54 localhost systemd[1]: run-user-42436.mount: Deactivated successfully. Dec 2 05:03:54 localhost systemd[1]: user-runtime-dir@42436.service: Deactivated successfully. Dec 2 05:03:54 localhost systemd[1]: Stopped User Runtime Directory /run/user/42436. Dec 2 05:03:54 localhost systemd[1]: Removed slice User Slice of UID 42436. Dec 2 05:03:54 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v128: 177 pgs: 177 active+clean; 225 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 155 op/s Dec 2 05:03:55 localhost nova_compute[281045]: 2025-12-02 10:03:55.440 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:55 localhost neutron_sriov_agent[255428]: 2025-12-02 10:03:55.498 2 INFO neutron.agent.securitygroups_rpc [None req-e52c4e8f-c1be-4de8-b00c-43719449fd5b 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Security group member updated ['5c93e274-85ac-42d3-b949-bdb62e6b8c39']#033[00m Dec 2 05:03:56 localhost nova_compute[281045]: 2025-12-02 10:03:56.384 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:56 localhost nova_compute[281045]: 2025-12-02 10:03:56.538 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] 
Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:56 localhost nova_compute[281045]: 2025-12-02 10:03:56.594 281049 DEBUG nova.virt.libvirt.driver [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m Dec 2 05:03:56 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v129: 177 pgs: 177 active+clean; 225 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 155 op/s Dec 2 05:03:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:57 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 5 addresses Dec 2 05:03:57 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:03:57 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:03:57 localhost podman[309145]: 2025-12-02 10:03:57.750064297 +0000 UTC m=+0.056537549 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base 
Image) Dec 2 05:03:58 localhost nova_compute[281045]: 2025-12-02 10:03:58.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:58 localhost nova_compute[281045]: 2025-12-02 10:03:58.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:58 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v130: 177 pgs: 177 active+clean; 226 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 659 KiB/s rd, 37 KiB/s wr, 89 op/s Dec 2 05:03:58 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Deactivated successfully. Dec 2 05:03:58 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Consumed 13.130s CPU time. Dec 2 05:03:58 localhost systemd-machined[202765]: Machine qemu-3-instance-00000006 terminated. 
Dec 2 05:03:59 localhost nova_compute[281045]: 2025-12-02 10:03:59.263 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:59 localhost nova_compute[281045]: 2025-12-02 10:03:59.610 281049 INFO nova.virt.libvirt.driver [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance shutdown successfully after 13 seconds.#033[00m Dec 2 05:03:59 localhost nova_compute[281045]: 2025-12-02 10:03:59.617 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance destroyed successfully.#033[00m Dec 2 05:03:59 localhost nova_compute[281045]: 2025-12-02 10:03:59.617 281049 DEBUG nova.objects.instance [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lazy-loading 'numa_topology' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:03:59 localhost nova_compute[281045]: 2025-12-02 10:03:59.700 281049 INFO nova.virt.libvirt.driver [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Beginning cold snapshot process#033[00m Dec 2 05:03:59 localhost nova_compute[281045]: 2025-12-02 10:03:59.715 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:59 localhost nova_compute[281045]: 2025-12-02 10:03:59.879 281049 DEBUG nova.virt.libvirt.imagebackend [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] No parent info for 
d85e840d-fa56-497b-b5bd-b49584d3e97a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m Dec 2 05:03:59 localhost nova_compute[281045]: 2025-12-02 10:03:59.921 281049 DEBUG nova.storage.rbd_utils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] creating snapshot(564b67ddbec84407a44d7f8337429375) on rbd image(268e09a3-7abe-4037-a14a-068e7b8a78fb_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Dec 2 05:04:00 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e103 e103: 6 total, 6 up, 6 in Dec 2 05:04:00 localhost nova_compute[281045]: 2025-12-02 10:04:00.284 281049 DEBUG nova.storage.rbd_utils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] cloning vms/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk@564b67ddbec84407a44d7f8337429375 to images/0e87d55f-56a4-4da8-9198-c633785685ee clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Dec 2 05:04:00 localhost nova_compute[281045]: 2025-12-02 10:04:00.443 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:00 localhost nova_compute[281045]: 2025-12-02 10:04:00.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:00 localhost nova_compute[281045]: 2025-12-02 10:04:00.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 
2 05:04:00 localhost nova_compute[281045]: 2025-12-02 10:04:00.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:04:00 localhost nova_compute[281045]: 2025-12-02 10:04:00.707 281049 DEBUG nova.storage.rbd_utils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] flattening images/0e87d55f-56a4-4da8-9198-c633785685ee flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Dec 2 05:04:00 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v132: 177 pgs: 177 active+clean; 226 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 659 KiB/s rd, 37 KiB/s wr, 89 op/s Dec 2 05:04:01 localhost nova_compute[281045]: 2025-12-02 10:04:01.386 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:01 localhost nova_compute[281045]: 2025-12-02 10:04:01.451 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:04:01 localhost nova_compute[281045]: 2025-12-02 10:04:01.451 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquired lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:04:01 localhost nova_compute[281045]: 2025-12-02 10:04:01.451 281049 DEBUG nova.network.neutron [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Forcefully refreshing network info cache for instance 
_get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:04:01 localhost nova_compute[281045]: 2025-12-02 10:04:01.451 281049 DEBUG nova.objects.instance [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:04:01 localhost nova_compute[281045]: 2025-12-02 10:04:01.572 281049 DEBUG nova.network.neutron [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Dec 2 05:04:01 localhost nova_compute[281045]: 2025-12-02 10:04:01.675 281049 DEBUG nova.storage.rbd_utils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] removing snapshot(564b67ddbec84407a44d7f8337429375) on rbd image(268e09a3-7abe-4037-a14a-068e7b8a78fb_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m Dec 2 05:04:01 localhost nova_compute[281045]: 2025-12-02 10:04:01.809 281049 DEBUG nova.network.neutron [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:04:01 localhost nova_compute[281045]: 2025-12-02 10:04:01.828 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Releasing lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:04:01 localhost nova_compute[281045]: 2025-12-02 10:04:01.829 281049 DEBUG nova.compute.manager 
[None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:04:01 localhost nova_compute[281045]: 2025-12-02 10:04:01.830 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:01 localhost nova_compute[281045]: 2025-12-02 10:04:01.830 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:01 localhost neutron_sriov_agent[255428]: 2025-12-02 10:04:01.896 2 INFO neutron.agent.securitygroups_rpc [None req-d7c6b922-a31a-45e0-b3f4-c5bd99f50015 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Security group member updated ['576d6513-029b-4880-bb0b-58094b586b90']#033[00m Dec 2 05:04:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e104 e104: 6 total, 6 up, 6 in Dec 2 05:04:02 localhost nova_compute[281045]: 2025-12-02 10:04:02.298 281049 DEBUG nova.storage.rbd_utils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] creating snapshot(snap) on rbd image(0e87d55f-56a4-4da8-9198-c633785685ee) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Dec 2 05:04:02 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:02.301 262347 INFO neutron.agent.linux.ip_lib [None req-42b907e5-4841-49aa-ab03-f3e6a1a35935 - - - - - -] Device tap7466a138-c4 cannot be used as it has no MAC address#033[00m Dec 2 05:04:02 
localhost kernel: device tap7466a138-c4 entered promiscuous mode Dec 2 05:04:02 localhost NetworkManager[5967]: [1764669842.3671] manager: (tap7466a138-c4): new Generic device (/org/freedesktop/NetworkManager/Devices/19) Dec 2 05:04:02 localhost ovn_controller[153778]: 2025-12-02T10:04:02Z|00069|binding|INFO|Claiming lport 7466a138-c45f-458b-a865-8c5d3b978b39 for this chassis. Dec 2 05:04:02 localhost ovn_controller[153778]: 2025-12-02T10:04:02Z|00070|binding|INFO|7466a138-c45f-458b-a865-8c5d3b978b39: Claiming unknown Dec 2 05:04:02 localhost systemd-udevd[309319]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:04:02 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:02.393 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-e59f1a37-9713-45f0-9ce4-adafcc25b854', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e59f1a37-9713-45f0-9ce4-adafcc25b854', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1db4f455ea047e3b37458f6d2c5e699', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=507aa10c-3500-464e-ac80-7fecb3c41257, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7466a138-c45f-458b-a865-8c5d3b978b39) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:02 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:02.395 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 7466a138-c45f-458b-a865-8c5d3b978b39 in datapath e59f1a37-9713-45f0-9ce4-adafcc25b854 bound to our chassis#033[00m Dec 2 05:04:02 localhost nova_compute[281045]: 2025-12-02 10:04:02.397 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:02 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:02.398 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port 854baa2d-45b1-472d-99b3-9d9f1dbe8c4b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:04:02 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:02.398 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e59f1a37-9713-45f0-9ce4-adafcc25b854, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:04:02 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:02.399 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1794f115-8951-4c44-84ef-31a973aa6359]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:02 localhost journal[229262]: ethtool ioctl error on tap7466a138-c4: No such device Dec 2 05:04:02 localhost journal[229262]: ethtool ioctl error on tap7466a138-c4: No such device Dec 2 05:04:02 localhost ovn_controller[153778]: 2025-12-02T10:04:02Z|00071|binding|INFO|Setting lport 7466a138-c45f-458b-a865-8c5d3b978b39 ovn-installed in OVS Dec 2 05:04:02 localhost ovn_controller[153778]: 2025-12-02T10:04:02Z|00072|binding|INFO|Setting lport 7466a138-c45f-458b-a865-8c5d3b978b39 up in Southbound Dec 2 
05:04:02 localhost nova_compute[281045]: 2025-12-02 10:04:02.407 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:02 localhost journal[229262]: ethtool ioctl error on tap7466a138-c4: No such device Dec 2 05:04:02 localhost nova_compute[281045]: 2025-12-02 10:04:02.411 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:02 localhost journal[229262]: ethtool ioctl error on tap7466a138-c4: No such device Dec 2 05:04:02 localhost journal[229262]: ethtool ioctl error on tap7466a138-c4: No such device Dec 2 05:04:02 localhost journal[229262]: ethtool ioctl error on tap7466a138-c4: No such device Dec 2 05:04:02 localhost journal[229262]: ethtool ioctl error on tap7466a138-c4: No such device Dec 2 05:04:02 localhost journal[229262]: ethtool ioctl error on tap7466a138-c4: No such device Dec 2 05:04:02 localhost nova_compute[281045]: 2025-12-02 10:04:02.438 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:02 localhost nova_compute[281045]: 2025-12-02 10:04:02.459 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:02 localhost nova_compute[281045]: 2025-12-02 10:04:02.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:02 localhost nova_compute[281045]: 2025-12-02 10:04:02.554 
281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:02 localhost nova_compute[281045]: 2025-12-02 10:04:02.554 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:02 localhost nova_compute[281045]: 2025-12-02 10:04:02.579 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:02 localhost nova_compute[281045]: 2025-12-02 10:04:02.580 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:02 localhost nova_compute[281045]: 2025-12-02 10:04:02.580 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:02 localhost nova_compute[281045]: 2025-12-02 10:04:02.581 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) 
update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:04:02 localhost nova_compute[281045]: 2025-12-02 10:04:02.581 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:02 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v134: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 6.6 MiB/s rd, 5.9 MiB/s wr, 181 op/s Dec 2 05:04:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:04:02 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2142671951' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:04:02 localhost nova_compute[281045]: 2025-12-02 10:04:02.994 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:03 localhost nova_compute[281045]: 2025-12-02 10:04:03.039 281049 DEBUG nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:04:03 localhost nova_compute[281045]: 2025-12-02 10:04:03.039 281049 DEBUG nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:04:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:03.175 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:03.176 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:03.176 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:03 localhost nova_compute[281045]: 2025-12-02 10:04:03.199 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:04:03 localhost nova_compute[281045]: 2025-12-02 10:04:03.200 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11595MB free_disk=41.70033645629883GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:04:03 localhost nova_compute[281045]: 2025-12-02 10:04:03.201 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:03 localhost nova_compute[281045]: 2025-12-02 10:04:03.201 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:03 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e105 e105: 6 total, 6 up, 6 in Dec 2 05:04:03 localhost nova_compute[281045]: 2025-12-02 10:04:03.309 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Instance 268e09a3-7abe-4037-a14a-068e7b8a78fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:04:03 localhost nova_compute[281045]: 2025-12-02 10:04:03.309 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:04:03 localhost nova_compute[281045]: 2025-12-02 10:04:03.310 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:04:03 localhost nova_compute[281045]: 2025-12-02 10:04:03.358 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:03 localhost podman[309415]: 2025-12-02 10:04:03.359278204 +0000 UTC m=+0.046522761 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:04:03 localhost podman[309415]: Dec 2 05:04:03 localhost podman[309415]: 2025-12-02 10:04:03.492392028 +0000 UTC m=+0.179636525 container create eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 2 05:04:03 localhost systemd[1]: Started libpod-conmon-eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461.scope. Dec 2 05:04:03 localhost systemd[1]: Started libcrun container. Dec 2 05:04:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0d093117dd79caf19187febcd7ccef397c254025e14d4a134627538b5ac62e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:04:03 localhost podman[309415]: 2025-12-02 10:04:03.573163622 +0000 UTC m=+0.260408129 container init eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:04:03 localhost podman[309415]: 2025-12-02 10:04:03.581186258 +0000 UTC m=+0.268430765 container start eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 05:04:03 localhost dnsmasq[309453]: started, version 2.85 cachesize 150 Dec 2 05:04:03 localhost dnsmasq[309453]: DNS 
service limited to local subnets Dec 2 05:04:03 localhost dnsmasq[309453]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:04:03 localhost dnsmasq[309453]: warning: no upstream servers configured Dec 2 05:04:03 localhost dnsmasq-dhcp[309453]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:04:03 localhost dnsmasq[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/addn_hosts - 0 addresses Dec 2 05:04:03 localhost dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/host Dec 2 05:04:03 localhost dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/opts Dec 2 05:04:03 localhost podman[239757]: time="2025-12-02T10:04:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:04:03 localhost podman[239757]: @ - - [02/Dec/2025:10:04:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158570 "" "Go-http-client/1.1" Dec 2 05:04:03 localhost podman[239757]: @ - - [02/Dec/2025:10:04:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19678 "" "Go-http-client/1.1" Dec 2 05:04:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:03.813 262347 INFO neutron.agent.dhcp.agent [None req-e3374617-6fae-4cf4-83f9-f5993e6fd367 - - - - - -] DHCP configuration for ports {'09a8ae64-d204-4cfc-97ef-a2500c78fa1a'} is completed#033[00m Dec 2 05:04:03 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:04:03 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/682142553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:04:03 localhost nova_compute[281045]: 2025-12-02 10:04:03.879 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:03 localhost nova_compute[281045]: 2025-12-02 10:04:03.886 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:04:03 localhost nova_compute[281045]: 2025-12-02 10:04:03.908 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:04:03 localhost nova_compute[281045]: 2025-12-02 10:04:03.933 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:04:03 localhost nova_compute[281045]: 2025-12-02 10:04:03.934 281049 DEBUG 
oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:03.942 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:03Z, description=, device_id=c9ef5a4e-598e-409c-9cfd-7339d9fd74ab, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9f97a067-96bf-4249-b193-b9cbcf841c2f, ip_allocation=immediate, mac_address=fa:16:3e:c2:f0:84, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=548, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:04:03Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m 
Dec 2 05:04:04 localhost neutron_sriov_agent[255428]: 2025-12-02 10:04:04.111 2 INFO neutron.agent.securitygroups_rpc [None req-477510e9-c030-4124-bb5e-ce2ad555248a 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Security group member updated ['5c93e274-85ac-42d3-b949-bdb62e6b8c39']#033[00m Dec 2 05:04:04 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses Dec 2 05:04:04 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:04:04 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:04:04 localhost podman[309471]: 2025-12-02 10:04:04.161380761 +0000 UTC m=+0.062006498 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:04:04 localhost nova_compute[281045]: 2025-12-02 10:04:04.322 281049 INFO nova.virt.libvirt.driver [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Snapshot image upload complete#033[00m Dec 2 05:04:04 localhost nova_compute[281045]: 2025-12-02 10:04:04.322 281049 DEBUG nova.compute.manager [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state 
_get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:04 localhost nova_compute[281045]: 2025-12-02 10:04:04.393 281049 INFO nova.compute.manager [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Shelve offloading#033[00m Dec 2 05:04:04 localhost nova_compute[281045]: 2025-12-02 10:04:04.403 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance destroyed successfully.#033[00m Dec 2 05:04:04 localhost nova_compute[281045]: 2025-12-02 10:04:04.403 281049 DEBUG nova.compute.manager [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:04 localhost nova_compute[281045]: 2025-12-02 10:04:04.408 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:04:04 localhost nova_compute[281045]: 2025-12-02 10:04:04.408 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquired lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:04:04 localhost nova_compute[281045]: 2025-12-02 10:04:04.409 281049 DEBUG nova.network.neutron [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - 
default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 2 05:04:04 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:04.435 262347 INFO neutron.agent.dhcp.agent [None req-c5b9814f-9350-44f2-8d86-e75292c7a42b - - - - - -] DHCP configuration for ports {'9f97a067-96bf-4249-b193-b9cbcf841c2f'} is completed#033[00m Dec 2 05:04:04 localhost nova_compute[281045]: 2025-12-02 10:04:04.477 281049 DEBUG nova.network.neutron [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Dec 2 05:04:04 localhost nova_compute[281045]: 2025-12-02 10:04:04.711 281049 DEBUG nova.network.neutron [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:04:04 localhost nova_compute[281045]: 2025-12-02 10:04:04.738 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Releasing lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:04:04 localhost nova_compute[281045]: 2025-12-02 10:04:04.748 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance destroyed successfully.#033[00m Dec 2 05:04:04 localhost nova_compute[281045]: 2025-12-02 10:04:04.749 
281049 DEBUG nova.objects.instance [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lazy-loading 'resources' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:04:04 localhost nova_compute[281045]: 2025-12-02 10:04:04.908 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:04 localhost nova_compute[281045]: 2025-12-02 10:04:04.908 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:04:04 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 146 op/s Dec 2 05:04:05 localhost nova_compute[281045]: 2025-12-02 10:04:05.405 281049 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:04:05 localhost nova_compute[281045]: 2025-12-02 10:04:05.406 281049 INFO nova.compute.manager [-] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] VM Stopped (Lifecycle Event)#033[00m Dec 2 05:04:05 localhost nova_compute[281045]: 2025-12-02 10:04:05.441 281049 DEBUG nova.compute.manager [None req-9e4e8d66-15e5-44dd-a723-2513e261e1b8 - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:05 localhost nova_compute[281045]: 2025-12-02 10:04:05.447 281049 INFO nova.virt.libvirt.driver 
[None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deleting instance files /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb_del#033[00m Dec 2 05:04:05 localhost nova_compute[281045]: 2025-12-02 10:04:05.447 281049 INFO nova.virt.libvirt.driver [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deletion of /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb_del complete#033[00m Dec 2 05:04:05 localhost nova_compute[281045]: 2025-12-02 10:04:05.450 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:05 localhost nova_compute[281045]: 2025-12-02 10:04:05.544 281049 INFO nova.scheduler.client.report [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Deleted allocations for instance 268e09a3-7abe-4037-a14a-068e7b8a78fb#033[00m Dec 2 05:04:05 localhost nova_compute[281045]: 2025-12-02 10:04:05.583 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:05 localhost nova_compute[281045]: 2025-12-02 10:04:05.584 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:05 localhost nova_compute[281045]: 2025-12-02 10:04:05.608 281049 DEBUG oslo_concurrency.processutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:05 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:05.621 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:05Z, description=, device_id=3138b719-9e61-40de-8133-b237ae970e06, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3420ea4d-964c-418c-b3a2-2a8fc6d1f09f, ip_allocation=immediate, mac_address=fa:16:3e:2e:dd:70, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, 
resource_request=None, revision_number=1, security_groups=[], standard_attr_id=560, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:04:05Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:04:05 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e106 e106: 6 total, 6 up, 6 in Dec 2 05:04:05 localhost systemd[1]: tmp-crun.pDid0f.mount: Deactivated successfully. Dec 2 05:04:05 localhost podman[309549]: 2025-12-02 10:04:05.842982674 +0000 UTC m=+0.053585068 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:04:05 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 7 addresses Dec 2 05:04:05 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:04:05 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:04:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:04:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/4051220607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:04:06 localhost nova_compute[281045]: 2025-12-02 10:04:06.086 281049 DEBUG oslo_concurrency.processutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:06 localhost nova_compute[281045]: 2025-12-02 10:04:06.092 281049 DEBUG nova.compute.provider_tree [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:04:06 localhost nova_compute[281045]: 2025-12-02 10:04:06.125 281049 DEBUG nova.scheduler.client.report [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:04:06 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:06.127 262347 INFO neutron.agent.dhcp.agent [None req-a3156c44-5873-487a-a537-03a254fd7ee1 - - - - - -] DHCP configuration for ports 
{'3420ea4d-964c-418c-b3a2-2a8fc6d1f09f'} is completed#033[00m Dec 2 05:04:06 localhost nova_compute[281045]: 2025-12-02 10:04:06.159 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:06 localhost nova_compute[281045]: 2025-12-02 10:04:06.164 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:06 localhost nova_compute[281045]: 2025-12-02 10:04:06.233 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" "released" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: held 19.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:06 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:06.358 262347 INFO neutron.agent.linux.ip_lib [None req-d9f24fe9-79c3-4a66-9e57-9eb192b8d7a2 - - - - - -] Device tapde515592-06 cannot be used as it has no MAC address#033[00m Dec 2 05:04:06 localhost nova_compute[281045]: 2025-12-02 10:04:06.378 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:06 localhost kernel: device tapde515592-06 entered promiscuous mode Dec 2 05:04:06 localhost NetworkManager[5967]: [1764669846.3857] manager: (tapde515592-06): new Generic device (/org/freedesktop/NetworkManager/Devices/20) Dec 2 05:04:06 localhost ovn_controller[153778]: 2025-12-02T10:04:06Z|00073|binding|INFO|Claiming lport 
de515592-061d-469f-83fb-52a8d86b335c for this chassis. Dec 2 05:04:06 localhost ovn_controller[153778]: 2025-12-02T10:04:06Z|00074|binding|INFO|de515592-061d-469f-83fb-52a8d86b335c: Claiming unknown Dec 2 05:04:06 localhost nova_compute[281045]: 2025-12-02 10:04:06.386 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:06 localhost systemd-udevd[309582]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:04:06 localhost journal[229262]: ethtool ioctl error on tapde515592-06: No such device Dec 2 05:04:06 localhost nova_compute[281045]: 2025-12-02 10:04:06.418 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:06 localhost ovn_controller[153778]: 2025-12-02T10:04:06Z|00075|binding|INFO|Setting lport de515592-061d-469f-83fb-52a8d86b335c ovn-installed in OVS Dec 2 05:04:06 localhost journal[229262]: ethtool ioctl error on tapde515592-06: No such device Dec 2 05:04:06 localhost journal[229262]: ethtool ioctl error on tapde515592-06: No such device Dec 2 05:04:06 localhost journal[229262]: ethtool ioctl error on tapde515592-06: No such device Dec 2 05:04:06 localhost journal[229262]: ethtool ioctl error on tapde515592-06: No such device Dec 2 05:04:06 localhost journal[229262]: ethtool ioctl error on tapde515592-06: No such device Dec 2 05:04:06 localhost journal[229262]: ethtool ioctl error on tapde515592-06: No such device Dec 2 05:04:06 localhost journal[229262]: ethtool ioctl error on tapde515592-06: No such device Dec 2 05:04:06 localhost nova_compute[281045]: 2025-12-02 10:04:06.460 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:06 localhost nova_compute[281045]: 2025-12-02 10:04:06.494 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on 
fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:06 localhost ovn_controller[153778]: 2025-12-02T10:04:06Z|00076|binding|INFO|Setting lport de515592-061d-469f-83fb-52a8d86b335c up in Southbound Dec 2 05:04:06 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:06.524 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1e893da-07af-44e3-945f-c862571583e8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=de515592-061d-469f-83fb-52a8d86b335c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:06 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:06.526 159483 INFO neutron.agent.ovn.metadata.agent [-] Port de515592-061d-469f-83fb-52a8d86b335c in datapath c40d86e4-7101-443b-abce-328f7d1ea40e bound to our chassis#033[00m Dec 2 05:04:06 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:06.528 159483 
DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c40d86e4-7101-443b-abce-328f7d1ea40e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:04:06 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:06.530 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb458ec-3aeb-4bf1-a864-555e021b6fb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:04:06 Dec 2 05:04:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 05:04:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap Dec 2 05:04:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'manila_data', '.mgr', 'images', 'manila_metadata'] Dec 2 05:04:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes Dec 2 05:04:06 localhost nova_compute[281045]: 2025-12-02 10:04:06.873 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:04:06 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 146 op/s Dec 2 05:04:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:04:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:04:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:04:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:04:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:04:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:04:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:04:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 2 05:04:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:04:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006580482708682301 of space, bias 1.0, pg target 1.31609654173646 quantized to 32 (current 32) Dec 2 05:04:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:04:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:04:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:04:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.007545049466080402 of space, bias 1.0, pg target 1.50146484375 quantized to 32 (current 32) Dec 2 05:04:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:04:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:04:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:04:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:04:06 localhost ceph-mgr[287188]: 
[pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:04:06 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019400387353433835 quantized to 16 (current 16) Dec 2 05:04:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:04:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:04:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:04:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:04:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:04:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:04:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:04:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:04:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:04:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:04:07 localhost podman[309653]: Dec 2 05:04:07 localhost podman[309653]: 2025-12-02 10:04:07.37541726 +0000 UTC m=+0.089644817 container create 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 05:04:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:04:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:04:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:04:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:04:07 localhost podman[309653]: 2025-12-02 10:04:07.332267463 +0000 UTC m=+0.046495050 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:04:07 localhost systemd[1]: Started libpod-conmon-8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464.scope. Dec 2 05:04:07 localhost systemd[1]: Started libcrun container. 
Dec 2 05:04:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffda5ed93ce183a48e515ae2d9c5dd554b9888bdc7086e24a8673d10ea36adfd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:04:07 localhost podman[309653]: 2025-12-02 10:04:07.47197901 +0000 UTC m=+0.186206567 container init 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:04:07 localhost dnsmasq[309707]: started, version 2.85 cachesize 150 Dec 2 05:04:07 localhost dnsmasq[309707]: DNS service limited to local subnets Dec 2 05:04:07 localhost dnsmasq[309707]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:04:07 localhost dnsmasq[309707]: warning: no upstream servers configured Dec 2 05:04:07 localhost dnsmasq-dhcp[309707]: DHCP, static leases only on 19.80.0.0, lease time 1d Dec 2 05:04:07 localhost dnsmasq[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/addn_hosts - 0 addresses Dec 2 05:04:07 localhost dnsmasq-dhcp[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/host Dec 2 05:04:07 localhost dnsmasq-dhcp[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/opts Dec 2 05:04:07 localhost podman[309667]: 2025-12-02 10:04:07.518694326 +0000 UTC m=+0.095579030 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:04:07 localhost podman[309653]: 2025-12-02 10:04:07.531214751 +0000 UTC m=+0.245442268 container start 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:04:07 localhost podman[309669]: 2025-12-02 10:04:07.577796204 +0000 UTC m=+0.138895862 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:04:07 localhost podman[309669]: 2025-12-02 10:04:07.590217116 +0000 UTC m=+0.151316734 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:04:07 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:04:07 localhost podman[309667]: 2025-12-02 10:04:07.6023697 +0000 UTC m=+0.179254464 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 05:04:07 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 05:04:07 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:07.642 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:07Z, description=, device_id=c9ef5a4e-598e-409c-9cfd-7339d9fd74ab, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=995d0e5e-bc64-4b9b-ab8f-120b9c16f0c1, ip_allocation=immediate, mac_address=fa:16:3e:34:09:6c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:03:54Z, description=, dns_domain=, id=e59f1a37-9713-45f0-9ce4-adafcc25b854, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesNegativeTestJSON-774700713-network, port_security_enabled=True, project_id=b1db4f455ea047e3b37458f6d2c5e699, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=45410, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=521, status=ACTIVE, subnets=['63a6053d-0067-412b-8c97-ca7de4cc1f0d'], tags=[], tenant_id=b1db4f455ea047e3b37458f6d2c5e699, updated_at=2025-12-02T10:03:59Z, vlan_transparent=None, network_id=e59f1a37-9713-45f0-9ce4-adafcc25b854, port_security_enabled=False, project_id=b1db4f455ea047e3b37458f6d2c5e699, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], 
standard_attr_id=570, status=DOWN, tags=[], tenant_id=b1db4f455ea047e3b37458f6d2c5e699, updated_at=2025-12-02T10:04:07Z on network e59f1a37-9713-45f0-9ce4-adafcc25b854#033[00m Dec 2 05:04:07 localhost podman[309675]: 2025-12-02 10:04:07.690980855 +0000 UTC m=+0.252758525 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 05:04:07 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:07.701 262347 INFO neutron.agent.dhcp.agent [None req-ccdf9dc0-3f52-470a-a720-dd15054f0053 - - - - - -] DHCP configuration for ports {'60398627-924e-4353-b9ee-b86c24b6fc87'} is completed#033[00m Dec 2 05:04:07 localhost podman[309668]: 2025-12-02 10:04:07.765725323 +0000 UTC 
m=+0.336127798 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:04:07 localhost podman[309675]: 2025-12-02 10:04:07.773890494 +0000 UTC m=+0.335668174 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 05:04:07 localhost podman[309668]: 2025-12-02 10:04:07.782782277 +0000 UTC m=+0.353184692 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:04:07 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 05:04:07 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 05:04:07 localhost dnsmasq[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/addn_hosts - 1 addresses Dec 2 05:04:07 localhost dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/host Dec 2 05:04:07 localhost podman[309768]: 2025-12-02 10:04:07.90836857 +0000 UTC m=+0.064586978 container kill eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:04:07 localhost dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/opts Dec 2 05:04:08 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:08.169 262347 INFO neutron.agent.dhcp.agent [None req-436baad9-cd96-470f-9d0f-2b02d58116f8 - - - - - -] DHCP configuration for ports {'995d0e5e-bc64-4b9b-ab8f-120b9c16f0c1'} is completed#033[00m Dec 2 05:04:08 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 177 active+clean; 226 MiB data, 876 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 6.9 MiB/s wr, 204 op/s Dec 2 05:04:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:09.324 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:07Z, description=, device_id=c9ef5a4e-598e-409c-9cfd-7339d9fd74ab, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], 
fixed_ips=[], id=995d0e5e-bc64-4b9b-ab8f-120b9c16f0c1, ip_allocation=immediate, mac_address=fa:16:3e:34:09:6c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:03:54Z, description=, dns_domain=, id=e59f1a37-9713-45f0-9ce4-adafcc25b854, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesNegativeTestJSON-774700713-network, port_security_enabled=True, project_id=b1db4f455ea047e3b37458f6d2c5e699, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=45410, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=521, status=ACTIVE, subnets=['63a6053d-0067-412b-8c97-ca7de4cc1f0d'], tags=[], tenant_id=b1db4f455ea047e3b37458f6d2c5e699, updated_at=2025-12-02T10:03:59Z, vlan_transparent=None, network_id=e59f1a37-9713-45f0-9ce4-adafcc25b854, port_security_enabled=False, project_id=b1db4f455ea047e3b37458f6d2c5e699, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=570, status=DOWN, tags=[], tenant_id=b1db4f455ea047e3b37458f6d2c5e699, updated_at=2025-12-02T10:04:07Z on network e59f1a37-9713-45f0-9ce4-adafcc25b854#033[00m Dec 2 05:04:09 localhost dnsmasq[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/addn_hosts - 1 addresses Dec 2 05:04:09 localhost podman[309806]: 2025-12-02 10:04:09.575860509 +0000 UTC m=+0.061962016 container kill eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:04:09 localhost dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/host Dec 2 05:04:09 localhost dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/opts Dec 2 05:04:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:09.893 262347 INFO neutron.agent.dhcp.agent [None req-8ba79ff6-fe6c-4423-94a1-6d28de18d525 - - - - - -] DHCP configuration for ports {'995d0e5e-bc64-4b9b-ab8f-120b9c16f0c1'} is completed#033[00m Dec 2 05:04:10 localhost nova_compute[281045]: 2025-12-02 10:04:10.453 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:10 localhost neutron_sriov_agent[255428]: 2025-12-02 10:04:10.811 2 INFO neutron.agent.securitygroups_rpc [None req-4cc1fa1d-9a41-40fb-9e7e-ba331f6b18b7 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Security group member updated ['576d6513-029b-4880-bb0b-58094b586b90']#033[00m Dec 2 05:04:10 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:10.896 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:09Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ffcaba02-6808-4409-8458-941ca0af2e66, ip_allocation=immediate, mac_address=fa:16:3e:a7:75:fd, name=tempest-subport-1664568330, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:04:02Z, description=, dns_domain=, id=c40d86e4-7101-443b-abce-328f7d1ea40e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, 
name=tempest-subport_net-1016568838, port_security_enabled=True, project_id=d048f19ff5fc47dc88162ef5f9cebe8b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=31453, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=546, status=ACTIVE, subnets=['77a7f10b-646e-4333-96b4-7957dbd5d33c'], tags=[], tenant_id=d048f19ff5fc47dc88162ef5f9cebe8b, updated_at=2025-12-02T10:04:05Z, vlan_transparent=None, network_id=c40d86e4-7101-443b-abce-328f7d1ea40e, port_security_enabled=True, project_id=d048f19ff5fc47dc88162ef5f9cebe8b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['576d6513-029b-4880-bb0b-58094b586b90'], standard_attr_id=584, status=DOWN, tags=[], tenant_id=d048f19ff5fc47dc88162ef5f9cebe8b, updated_at=2025-12-02T10:04:10Z on network c40d86e4-7101-443b-abce-328f7d1ea40e#033[00m Dec 2 05:04:10 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 177 active+clean; 226 MiB data, 876 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 2.4 KiB/s wr, 61 op/s Dec 2 05:04:11 localhost dnsmasq[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/addn_hosts - 1 addresses Dec 2 05:04:11 localhost dnsmasq-dhcp[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/host Dec 2 05:04:11 localhost podman[309845]: 2025-12-02 10:04:11.112715702 +0000 UTC m=+0.062833663 container kill 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, 
maintainer=OpenStack Kubernetes Operator team) Dec 2 05:04:11 localhost dnsmasq-dhcp[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/opts Dec 2 05:04:11 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:11.357 262347 INFO neutron.agent.dhcp.agent [None req-00d6282d-693f-4855-b5c3-b8421787d7d3 - - - - - -] DHCP configuration for ports {'ffcaba02-6808-4409-8458-941ca0af2e66'} is completed#033[00m Dec 2 05:04:11 localhost nova_compute[281045]: 2025-12-02 10:04:11.425 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:12 localhost nova_compute[281045]: 2025-12-02 10:04:12.049 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:12 localhost openstack_network_exporter[241816]: ERROR 10:04:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:04:12 localhost openstack_network_exporter[241816]: ERROR 10:04:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:04:12 localhost openstack_network_exporter[241816]: ERROR 10:04:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:04:12 localhost openstack_network_exporter[241816]: ERROR 10:04:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:04:12 localhost openstack_network_exporter[241816]: Dec 2 05:04:12 localhost openstack_network_exporter[241816]: ERROR 10:04:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:04:12 localhost openstack_network_exporter[241816]: Dec 2 05:04:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 
2 05:04:12 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 4.9 MiB/s rd, 4.8 MiB/s wr, 152 op/s Dec 2 05:04:13 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses Dec 2 05:04:13 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:04:13 localhost podman[309884]: 2025-12-02 10:04:13.745535447 +0000 UTC m=+0.046865302 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:04:13 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:04:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e107 e107: 6 total, 6 up, 6 in Dec 2 05:04:13 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:13.904 262347 INFO neutron.agent.dhcp.agent [None req-72a6fe85-35b8-438d-a5cc-737c0f2a4004 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:13Z, description=, device_id=021f5268-3dcb-4f99-bfbf-465820cbeab2, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8137ce5b-74f9-4747-998d-7d813193bde0, ip_allocation=immediate, mac_address=fa:16:3e:5f:02:b0, 
name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=590, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:04:13Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:04:14 localhost nova_compute[281045]: 2025-12-02 10:04:14.159 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:14 localhost nova_compute[281045]: 2025-12-02 10:04:14.162 281049 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:04:14 localhost nova_compute[281045]: 2025-12-02 10:04:14.163 281049 INFO nova.compute.manager [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] VM Stopped (Lifecycle Event)#033[00m Dec 2 05:04:14 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 7 addresses Dec 2 05:04:14 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:04:14 localhost dnsmasq-dhcp[262677]: read 
/var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:04:14 localhost podman[309920]: 2025-12-02 10:04:14.21941362 +0000 UTC m=+0.070553320 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:04:14 localhost nova_compute[281045]: 2025-12-02 10:04:14.701 281049 DEBUG nova.compute.manager [None req-fc31f5e1-6ae9-48c9-a927-c618e71720af - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:14 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 5.2 MiB/s rd, 5.1 MiB/s wr, 160 op/s Dec 2 05:04:15 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:15.159 262347 INFO neutron.agent.dhcp.agent [None req-11537578-730e-445c-98e7-4f501f9f06bf - - - - - -] DHCP configuration for ports {'8137ce5b-74f9-4747-998d-7d813193bde0'} is completed#033[00m Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 
localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.445 12 
DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no 
resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:04:15 localhost nova_compute[281045]: 2025-12-02 10:04:15.456 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:04:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:04:16 localhost podman[309942]: 2025-12-02 10:04:16.137340291 +0000 UTC m=+0.138454840 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', 
'--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:04:16 localhost podman[309942]: 2025-12-02 10:04:16.146654237 +0000 UTC m=+0.147768796 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:04:16 localhost podman[309943]: 2025-12-02 10:04:16.105091248 +0000 
UTC m=+0.105392302 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the 
minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 05:04:16 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:04:16 localhost podman[309943]: 2025-12-02 10:04:16.190264997 +0000 UTC m=+0.190566061 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Dec 2 05:04:16 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:04:16 localhost neutron_sriov_agent[255428]: 2025-12-02 10:04:16.388 2 INFO neutron.agent.securitygroups_rpc [req-bec2fcab-0b29-48c5-8c73-7c95715690aa req-3ce61e55-77a0-41a7-a01c-658bb353c505 5d2a1dd73fee440789897d09ac4f0afc b1db4f455ea047e3b37458f6d2c5e699 - - default default] Security group rule updated ['df5547d9-a152-449e-8fa5-5094da38cd68']#033[00m Dec 2 05:04:16 localhost nova_compute[281045]: 2025-12-02 10:04:16.448 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:16 localhost nova_compute[281045]: 2025-12-02 10:04:16.743 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:16 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v144: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 147 op/s Dec 2 05:04:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:17 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:17.485 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:16Z, description=, device_id=804ee8e0-ce25-466a-ae8c-2cc899f8a9bc, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f65c1a0f-6b44-40ae-872d-dbeedbe50a5f, ip_allocation=immediate, mac_address=fa:16:3e:fa:3f:67, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, 
ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=598, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:04:16Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:04:17 localhost podman[309999]: 2025-12-02 10:04:17.705996742 +0000 UTC m=+0.058494050 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:04:17 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 8 addresses Dec 2 05:04:17 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:04:17 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:04:17 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:17.923 
262347 INFO neutron.agent.dhcp.agent [None req-876ef987-9ad9-4826-92aa-06d490871d4a - - - - - -] DHCP configuration for ports {'f65c1a0f-6b44-40ae-872d-dbeedbe50a5f'} is completed#033[00m Dec 2 05:04:18 localhost neutron_sriov_agent[255428]: 2025-12-02 10:04:18.296 2 INFO neutron.agent.securitygroups_rpc [req-3542c6d6-3e9a-4403-b3b7-62c55b0a2440 req-a1b9621e-b7b6-4f72-a92d-ded5fdb895c8 5d2a1dd73fee440789897d09ac4f0afc b1db4f455ea047e3b37458f6d2c5e699 - - default default] Security group rule updated ['df5547d9-a152-449e-8fa5-5094da38cd68']#033[00m Dec 2 05:04:18 localhost nova_compute[281045]: 2025-12-02 10:04:18.738 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:18 localhost nova_compute[281045]: 2025-12-02 10:04:18.738 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:18 localhost nova_compute[281045]: 2025-12-02 10:04:18.754 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Starting instance... 
_do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Dec 2 05:04:18 localhost nova_compute[281045]: 2025-12-02 10:04:18.819 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:18 localhost nova_compute[281045]: 2025-12-02 10:04:18.820 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:18 localhost nova_compute[281045]: 2025-12-02 10:04:18.824 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Dec 2 05:04:18 localhost nova_compute[281045]: 2025-12-02 10:04:18.824 281049 INFO nova.compute.claims [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Claim successful on node np0005541914.localdomain#033[00m Dec 2 05:04:18 localhost nova_compute[281045]: 2025-12-02 10:04:18.947 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:18 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 226 MiB data, 873 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s Dec 2 05:04:19 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:04:19 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:04:19 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:04:19 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:04:19 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:04:19 localhost ceph-mgr[287188]: 
[progress INFO root] update: starting ev a2e9238b-7b7c-4105-87fe-d88907e6ba50 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:04:19 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev a2e9238b-7b7c-4105-87fe-d88907e6ba50 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:04:19 localhost ceph-mgr[287188]: [progress INFO root] Completed event a2e9238b-7b7c-4105-87fe-d88907e6ba50 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:04:19 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:04:19 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:04:19 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:04:19 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/675123350' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.350 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.356 281049 DEBUG nova.compute.provider_tree [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.382 281049 DEBUG nova.scheduler.client.report [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.406 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default 
default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.407 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.465 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.465 281049 DEBUG nova.network.neutron [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.484 281049 INFO nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Ignoring supplied device name: /dev/vda. 
Libvirt can't honour user-supplied dev names#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.503 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.657 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.659 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.660 281049 INFO nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Creating image(s)#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.696 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk does not exist __init__ 
/usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.732 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.764 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.770 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.844 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:19 
localhost nova_compute[281045]: 2025-12-02 10:04:19.845 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquiring lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.846 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.846 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.873 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.878 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] 
Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc 82e23ec3-1d57-4166-9ba0-839ded943a78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.977 281049 WARNING oslo_policy.policy [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.978 281049 WARNING oslo_policy.policy [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. 
You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Dec 2 05:04:19 localhost nova_compute[281045]: 2025-12-02 10:04:19.982 281049 DEBUG nova.policy [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ec20a6cceee246d6b46878df263d30a4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m Dec 2 05:04:20 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:04:20 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:04:20 localhost nova_compute[281045]: 2025-12-02 10:04:20.458 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:20 localhost nova_compute[281045]: 2025-12-02 10:04:20.572 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc 82e23ec3-1d57-4166-9ba0-839ded943a78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:20 localhost nova_compute[281045]: 2025-12-02 10:04:20.661 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] resizing rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Dec 2 05:04:20 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 e108: 6 total, 6 up, 6 in Dec 2 05:04:20 localhost nova_compute[281045]: 2025-12-02 10:04:20.817 281049 DEBUG nova.objects.instance [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lazy-loading 'migration_context' on Instance uuid 82e23ec3-1d57-4166-9ba0-839ded943a78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:04:20 localhost nova_compute[281045]: 2025-12-02 10:04:20.833 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Dec 2 05:04:20 localhost nova_compute[281045]: 2025-12-02 10:04:20.834 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Ensure instance console log exists: /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Dec 2 05:04:20 localhost nova_compute[281045]: 2025-12-02 10:04:20.835 281049 DEBUG 
oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:20 localhost nova_compute[281045]: 2025-12-02 10:04:20.835 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:20 localhost nova_compute[281045]: 2025-12-02 10:04:20.835 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:20 localhost nova_compute[281045]: 2025-12-02 10:04:20.928 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:20 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 226 MiB data, 873 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 1.7 KiB/s wr, 137 op/s Dec 2 05:04:21 localhost nova_compute[281045]: 2025-12-02 10:04:21.451 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:22 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:04:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 
handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:04:22 localhost nova_compute[281045]: 2025-12-02 10:04:22.189 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:22 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 193 op/s Dec 2 05:04:22 localhost nova_compute[281045]: 2025-12-02 10:04:22.975 281049 DEBUG nova.network.neutron [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Successfully updated port: 54433c73-7e5c-481c-b64c-19e9cfd6e56f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m Dec 2 05:04:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 05:04:22 localhost nova_compute[281045]: 2025-12-02 10:04:22.992 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquiring lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:04:22 localhost nova_compute[281045]: 2025-12-02 10:04:22.993 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquired lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:04:22 localhost nova_compute[281045]: 2025-12-02 10:04:22.993 281049 DEBUG nova.network.neutron [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 2 05:04:23 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:04:23 localhost systemd[1]: tmp-crun.JIeEY4.mount: Deactivated successfully. Dec 2 05:04:23 localhost nova_compute[281045]: 2025-12-02 10:04:23.094 281049 DEBUG nova.network.neutron [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Dec 2 05:04:23 localhost podman[310295]: 2025-12-02 10:04:23.094618455 +0000 UTC m=+0.101471001 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 05:04:23 localhost podman[310295]: 2025-12-02 
10:04:23.134028317 +0000 UTC m=+0.140880883 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd) Dec 2 05:04:23 localhost nova_compute[281045]: 2025-12-02 10:04:23.135 281049 DEBUG nova.compute.manager [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 
497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-changed-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:04:23 localhost nova_compute[281045]: 2025-12-02 10:04:23.136 281049 DEBUG nova.compute.manager [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Refreshing instance network info cache due to event network-changed-54433c73-7e5c-481c-b64c-19e9cfd6e56f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Dec 2 05:04:23 localhost nova_compute[281045]: 2025-12-02 10:04:23.136 281049 DEBUG oslo_concurrency.lockutils [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:04:23 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:04:23 localhost neutron_sriov_agent[255428]: 2025-12-02 10:04:23.966 2 INFO neutron.agent.securitygroups_rpc [req-65998e7d-c26a-45a5-8676-fd86a74e40b3 req-1863187d-62f6-4dd8-8a63-a2eeaa9837d3 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['dfa589a5-e6b3-419a-9bd7-e5b7ecfd8cd6']#033[00m Dec 2 05:04:24 localhost podman[310330]: 2025-12-02 10:04:24.072339262 +0000 UTC m=+0.059511361 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:04:24 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 7 addresses Dec 2 05:04:24 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:04:24 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.331 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.353 281049 DEBUG nova.network.neutron [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Updating instance_info_cache with network_info: [{"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": 
"fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.382 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Releasing lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.383 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Instance network_info: |[{"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": 
"tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.384 281049 DEBUG oslo_concurrency.lockutils [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquired lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.384 281049 DEBUG nova.network.neutron [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Refreshing network info cache for port 54433c73-7e5c-481c-b64c-19e9cfd6e56f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.389 
281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Start _get_guest_xml network_info=[{"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-12-02T10:01:55Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'image_id': 'd85e840d-fa56-497b-b5bd-b49584d3e97a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.397 281049 WARNING nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.401 281049 DEBUG nova.virt.libvirt.host [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.402 281049 DEBUG nova.virt.libvirt.host [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CPU controller missing on host. 
_has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.404 281049 DEBUG nova.virt.libvirt.host [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.405 281049 DEBUG nova.virt.libvirt.host [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.405 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.406 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T10:01:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='82beb986-6d20-42dc-b738-1cef87dee30f',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta 
ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-12-02T10:01:55Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.407 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.407 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.408 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.408 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 
10:04:24.408 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.409 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.409 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.410 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.410 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Dec 2 05:04:24 localhost 
nova_compute[281045]: 2025-12-02 10:04:24.411 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.416 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:24 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:04:24 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2801301008' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.868 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.906 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:04:24 localhost nova_compute[281045]: 2025-12-02 10:04:24.912 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:24 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v149: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 175 op/s Dec 2 05:04:25 localhost nova_compute[281045]: 2025-12-02 10:04:25.268 281049 DEBUG nova.network.neutron [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Updated VIF entry in instance network info cache for port 54433c73-7e5c-481c-b64c-19e9cfd6e56f. 
_build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Dec 2 05:04:25 localhost nova_compute[281045]: 2025-12-02 10:04:25.268 281049 DEBUG nova.network.neutron [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Updating instance_info_cache with network_info: [{"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:04:25 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:04:25 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2351640434' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:04:25 localhost nova_compute[281045]: 2025-12-02 10:04:25.402 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:25 localhost nova_compute[281045]: 2025-12-02 10:04:25.404 281049 DEBUG nova.virt.libvirt.vif [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T10:04:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-39688497',display_name='tempest-LiveMigrationTest-server-39688497',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-livemigrationtest-server-39688497',id=8,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d048f19ff5fc47dc88162ef5f9cebe8b',ramdisk_id='',reservation_id='r-lnn0by93',resources=None,
root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1345186206',owner_user_name='tempest-LiveMigrationTest-1345186206-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T10:04:19Z,user_data=None,user_id='ec20a6cceee246d6b46878df263d30a4',uuid=82e23ec3-1d57-4166-9ba0-839ded943a78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Dec 2 05:04:25 localhost 
nova_compute[281045]: 2025-12-02 10:04:25.405 281049 DEBUG nova.network.os_vif_util [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Converting VIF {"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:04:25 localhost nova_compute[281045]: 2025-12-02 10:04:25.406 281049 DEBUG nova.network.os_vif_util [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 2 05:04:25 localhost nova_compute[281045]: 2025-12-02 10:04:25.409 281049 DEBUG nova.objects.instance [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lazy-loading 'pci_devices' on Instance uuid 82e23ec3-1d57-4166-9ba0-839ded943a78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 2 05:04:25 localhost nova_compute[281045]: 2025-12-02 10:04:25.463 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.470 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] End _get_guest_xml xml=
Dec 2 05:04:26 localhost nova_compute[281045]: 82e23ec3-1d57-4166-9ba0-839ded943a78
Dec 2 05:04:26 localhost nova_compute[281045]: instance-00000008
Dec 2 05:04:26 localhost nova_compute[281045]: 131072
Dec 2 05:04:26 localhost nova_compute[281045]: 1
Dec 2 05:04:26 localhost nova_compute[281045]: tempest-LiveMigrationTest-server-39688497
Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:24
Dec 2 05:04:26 localhost nova_compute[281045]: 128
Dec 2 05:04:26 localhost nova_compute[281045]: 1
Dec 2 05:04:26 localhost nova_compute[281045]: 0
Dec 2 05:04:26 localhost nova_compute[281045]: 0
Dec 2 05:04:26 localhost nova_compute[281045]: 1
Dec 2 05:04:26 localhost nova_compute[281045]: tempest-LiveMigrationTest-1345186206-project-member
Dec 2 05:04:26 localhost nova_compute[281045]: tempest-LiveMigrationTest-1345186206
Dec 2 05:04:26 localhost nova_compute[281045]: RDO
Dec 2 05:04:26 localhost nova_compute[281045]: OpenStack Compute
Dec 2 05:04:26 localhost nova_compute[281045]: 27.5.2-0.20250829104910.6f8decf.el9
Dec 2 05:04:26 localhost nova_compute[281045]: 82e23ec3-1d57-4166-9ba0-839ded943a78
Dec 2 05:04:26 localhost nova_compute[281045]: 82e23ec3-1d57-4166-9ba0-839ded943a78
Dec 2 05:04:26 localhost nova_compute[281045]: Virtual Machine
Dec 2 05:04:26 localhost nova_compute[281045]: hvm
Dec 2 05:04:26 localhost nova_compute[281045]: /dev/urandom
Dec 2 05:04:26 localhost nova_compute[281045]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.471 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Preparing to wait for external event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f prepare_for_instance_event
/usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.472 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.472 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.472 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.474 281049 DEBUG nova.virt.libvirt.vif [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T10:04:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-39688497',display_name='tempest-LiveMigrationTest-server-39688497',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-livemigrationtest-server-39688497',id=8,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d048f19ff5fc47dc88162ef5f9cebe8b',ramdisk_id='',reservation_id='r-lnn0by93',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1345186206',owner_user_name='tempest-LiveMigrationTest-1345186206-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T10:04:19Z,user_data=None,user_id='ec20a6cceee246d6b46878df263d30a4',uuid=82e23ec3-1d57-4166-9ba0-839ded943a78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='buildin
g') vif={"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.474 281049 DEBUG nova.network.os_vif_util [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Converting VIF {"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": 
"d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.475 281049 DEBUG nova.network.os_vif_util [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.476 281049 DEBUG os_vif [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.476 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.477 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.478 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.482 281049 DEBUG oslo_concurrency.lockutils [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Releasing lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.483 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.484 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54433c73-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.485 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54433c73-7e, col_values=(('external_ids', {'iface-id': '54433c73-7e5c-481c-b64c-19e9cfd6e56f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:b6:1c', 'vm-uuid': 
'82e23ec3-1d57-4166-9ba0-839ded943a78'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.496 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.504 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.506 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.508 281049 INFO os_vif [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e')#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.566 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] No BDM found with device name vda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.566 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.566 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] No VIF found with MAC fa:16:3e:bb:b6:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.567 281049 INFO nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Using config drive#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.598 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.729 281049 INFO nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Creating config drive at 
/var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/disk.config#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.735 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq5sfi6fb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.865 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq5sfi6fb" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.907 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:04:26 localhost nova_compute[281045]: 2025-12-02 10:04:26.911 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Running cmd (subprocess): rbd import --pool vms 
/var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/disk.config 82e23ec3-1d57-4166-9ba0-839ded943a78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:26 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 175 op/s Dec 2 05:04:27 localhost nova_compute[281045]: 2025-12-02 10:04:27.117 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/disk.config 82e23ec3-1d57-4166-9ba0-839ded943a78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:27 localhost nova_compute[281045]: 2025-12-02 10:04:27.119 281049 INFO nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Deleting local config drive /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/disk.config because it was imported into RBD.#033[00m Dec 2 05:04:27 localhost neutron_sriov_agent[255428]: 2025-12-02 10:04:27.148 2 INFO neutron.agent.securitygroups_rpc [req-6d0b23d6-658e-4a79-96cf-b8ca52a56a83 req-dc334455-9197-4ae2-b241-5b724098ced8 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['aadc9cbe-01f3-422d-afff-735004537d1d']#033[00m Dec 2 05:04:27 localhost kernel: device tap54433c73-7e entered promiscuous mode Dec 2 05:04:27 localhost NetworkManager[5967]: [1764669867.1729] manager: 
(tap54433c73-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/21) Dec 2 05:04:27 localhost nova_compute[281045]: 2025-12-02 10:04:27.173 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:27 localhost ovn_controller[153778]: 2025-12-02T10:04:27Z|00077|binding|INFO|Claiming lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f for this chassis. Dec 2 05:04:27 localhost ovn_controller[153778]: 2025-12-02T10:04:27Z|00078|binding|INFO|54433c73-7e5c-481c-b64c-19e9cfd6e56f: Claiming fa:16:3e:bb:b6:1c 10.100.0.13 Dec 2 05:04:27 localhost ovn_controller[153778]: 2025-12-02T10:04:27Z|00079|binding|INFO|Claiming lport ffcaba02-6808-4409-8458-941ca0af2e66 for this chassis. Dec 2 05:04:27 localhost ovn_controller[153778]: 2025-12-02T10:04:27Z|00080|binding|INFO|ffcaba02-6808-4409-8458-941ca0af2e66: Claiming fa:16:3e:a7:75:fd 19.80.0.43 Dec 2 05:04:27 localhost systemd-udevd[310481]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:04:27 localhost ovn_controller[153778]: 2025-12-02T10:04:27Z|00081|binding|INFO|Setting lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f ovn-installed in OVS Dec 2 05:04:27 localhost nova_compute[281045]: 2025-12-02 10:04:27.190 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:27 localhost NetworkManager[5967]: [1764669867.1981] device (tap54433c73-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 2 05:04:27 localhost NetworkManager[5967]: [1764669867.1995] device (tap54433c73-7e): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 2 05:04:27 localhost systemd-machined[202765]: New machine qemu-4-instance-00000008. Dec 2 05:04:27 localhost systemd[1]: Started Virtual Machine qemu-4-instance-00000008. 
Dec 2 05:04:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:27 localhost nova_compute[281045]: 2025-12-02 10:04:27.549 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:04:27 localhost nova_compute[281045]: 2025-12-02 10:04:27.550 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] VM Started (Lifecycle Event)#033[00m Dec 2 05:04:28 localhost ovn_controller[153778]: 2025-12-02T10:04:28Z|00082|binding|INFO|Setting lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f up in Southbound Dec 2 05:04:28 localhost ovn_controller[153778]: 2025-12-02T10:04:28Z|00083|binding|INFO|Setting lport ffcaba02-6808-4409-8458-941ca0af2e66 up in Southbound Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.329 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:75:fd 19.80.0.43'], port_security=['fa:16:3e:a7:75:fd 19.80.0.43'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['54433c73-7e5c-481c-b64c-19e9cfd6e56f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1664568330', 'neutron:cidrs': '19.80.0.43/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1664568330', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '2', 
'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=e1e893da-07af-44e3-945f-c862571583e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=ffcaba02-6808-4409-8458-941ca0af2e66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.331 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:b6:1c 10.100.0.13'], port_security=['fa:16:3e:bb:b6:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-146896978', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '82e23ec3-1d57-4166-9ba0-839ded943a78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-146896978', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e42abf-8647-4013-9c62-778191c64ad0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=54433c73-7e5c-481c-b64c-19e9cfd6e56f) old=Port_Binding(chassis=[]) 
matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.332 159483 INFO neutron.agent.ovn.metadata.agent [-] Port ffcaba02-6808-4409-8458-941ca0af2e66 in datapath c40d86e4-7101-443b-abce-328f7d1ea40e bound to our chassis#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.335 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port d8d3f12d-b617-495e-ba6c-02c2da59133c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.335 159483 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c40d86e4-7101-443b-abce-328f7d1ea40e#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.353 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[34e43b8a-a301-468f-856c-b4dc67888086]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.354 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc40d86e4-71 in ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.356 262550 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc40d86e4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.356 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[f99a6360-4c2d-4bee-9b4f-a85f8dac6e20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 
05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.357 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1535f9b5-72a7-4a6f-a550-72ab0844dcf0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.368 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[129385b2-0cbd-4c53-99b1-5f4ed45eaaae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:28 localhost nova_compute[281045]: 2025-12-02 10:04:28.374 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:28 localhost nova_compute[281045]: 2025-12-02 10:04:28.381 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:04:28 localhost nova_compute[281045]: 2025-12-02 10:04:28.381 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] VM Paused (Lifecycle Event)#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.381 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[cc159051-91c0-4d79-8e5f-01fecb59eb5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:28 localhost nova_compute[281045]: 2025-12-02 10:04:28.398 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:28 localhost 
nova_compute[281045]: 2025-12-02 10:04:28.403 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.406 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[11859cec-4356-455e-ab03-32a2e6675113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.413 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[db17ba87-e5b3-49f3-91da-4f2be962e445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:28 localhost NetworkManager[5967]: [1764669868.4140] manager: (tapc40d86e4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/22) Dec 2 05:04:28 localhost nova_compute[281045]: 2025-12-02 10:04:28.419 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.439 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[2d71cbaf-07fa-4e6c-8f64-c279a66bfddc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.442 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7c7530-586a-47c1-84d9-cda5acd17c0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:28 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapc40d86e4-71: link becomes ready Dec 2 05:04:28 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapc40d86e4-70: link becomes ready Dec 2 05:04:28 localhost NetworkManager[5967]: [1764669868.4602] device (tapc40d86e4-70): carrier: link connected Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.466 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[5802b722-bc38-4723-8d17-0f5a8480c462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.482 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d86255-fc0c-4ec4-8398-2c7d824b6354]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc40d86e4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 
'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0f:45:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1203262, 
'reachable_time': 28919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310560, 'error': None, 'target': 'ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.497 262550 DEBUG 
oslo.privsep.daemon [-] privsep: reply[55930abe-45c3-434c-a4a4-3d3efcf0fadb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:457f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1203262, 'tstamp': 1203262}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310561, 'error': None, 'target': 'ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.513 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[2d311e9c-6775-4eec-bbeb-d038b0f94b41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc40d86e4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0f:45:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 
'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1203262, 'reachable_time': 28919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 
'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310562, 'error': None, 'target': 'ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.541 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[289b3590-6db4-4164-9945-6c36d02feb0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.594 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[b133aec6-362d-4b51-8c98-78584d210213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:28 localhost 
ovn_metadata_agent[159477]: 2025-12-02 10:04:28.596 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc40d86e4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.597 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.597 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc40d86e4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:28 localhost nova_compute[281045]: 2025-12-02 10:04:28.645 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:28 localhost kernel: device tapc40d86e4-70 entered promiscuous mode Dec 2 05:04:28 localhost nova_compute[281045]: 2025-12-02 10:04:28.648 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.650 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc40d86e4-70, col_values=(('external_ids', {'iface-id': '60398627-924e-4353-b9ee-b86c24b6fc87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:28 localhost ovn_controller[153778]: 2025-12-02T10:04:28Z|00084|binding|INFO|Releasing lport 60398627-924e-4353-b9ee-b86c24b6fc87 from this chassis (sb_readonly=0) Dec 2 
05:04:28 localhost nova_compute[281045]: 2025-12-02 10:04:28.652 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:28 localhost nova_compute[281045]: 2025-12-02 10:04:28.663 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.664 159483 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c40d86e4-7101-443b-abce-328f7d1ea40e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c40d86e4-7101-443b-abce-328f7d1ea40e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.665 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9d51e4-e42c-4c2c-998c-a7a75f12b091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.666 159483 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: global Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: log /dev/log local0 debug Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: log-tag haproxy-metadata-proxy-c40d86e4-7101-443b-abce-328f7d1ea40e Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: user root Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: group root Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: maxconn 1024 Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: pidfile /var/lib/neutron/external/pids/c40d86e4-7101-443b-abce-328f7d1ea40e.pid.haproxy Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: daemon Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: Dec 2 05:04:28 localhost 
ovn_metadata_agent[159477]: defaults Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: log global Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: mode http Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: option httplog Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: option dontlognull Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: option http-server-close Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: option forwardfor Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: retries 3 Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: timeout http-request 30s Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: timeout connect 30s Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: timeout client 32s Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: timeout server 32s Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: timeout http-keep-alive 30s Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: listen listener Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: bind 169.254.169.254:80 Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: server metadata /var/lib/neutron/metadata_proxy Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: http-request add-header X-OVN-Network-ID c40d86e4-7101-443b-abce-328f7d1ea40e Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 2 05:04:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:28.666 159483 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e', 'env', 'PROCESS_TAG=haproxy-c40d86e4-7101-443b-abce-328f7d1ea40e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c40d86e4-7101-443b-abce-328f7d1ea40e.conf'] create_process 
/usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 2 05:04:28 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v151: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 2.1 MiB/s wr, 77 op/s Dec 2 05:04:29 localhost podman[310595]: Dec 2 05:04:29 localhost podman[310595]: 2025-12-02 10:04:29.064002238 +0000 UTC m=+0.074864483 container create 2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:04:29 localhost systemd[1]: Started libpod-conmon-2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68.scope. Dec 2 05:04:29 localhost systemd[1]: Started libcrun container. 
Dec 2 05:04:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba06f5d13e82d475898b43f8dfdffe45494dd6b8060a149bf598427f2a15c274/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:04:29 localhost podman[310595]: 2025-12-02 10:04:29.028214958 +0000 UTC m=+0.039077163 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 05:04:29 localhost podman[310595]: 2025-12-02 10:04:29.129305056 +0000 UTC m=+0.140167271 container init 2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:04:29 localhost podman[310595]: 2025-12-02 10:04:29.14112514 +0000 UTC m=+0.151987345 container start 2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:04:29 localhost neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[310609]: [NOTICE] (310613) : New worker (310615) forked Dec 2 05:04:29 localhost 
neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[310609]: [NOTICE] (310613) : Loading success. Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.212 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 54433c73-7e5c-481c-b64c-19e9cfd6e56f in datapath 13bbad22-ab61-4b1f-849e-c651aa8f3297 unbound from our chassis#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.216 159483 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13bbad22-ab61-4b1f-849e-c651aa8f3297#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.227 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[a4abf6ba-8307-4911-a2a2-d22559ef58f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.229 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap13bbad22-a1 in ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.231 262550 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap13bbad22-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.231 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[db4fd512-0df2-4511-b205-2ce764eb0b94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.232 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[251a3dd1-11a3-45f9-9d98-b25a7d0355f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 
10:04:29.243 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[52ad341a-e7a1-483f-8d71-994cd2ded410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.258 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[0469064d-3016-48be-95b9-d1baf1653b86]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.289 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[69348485-606f-4752-8f11-933bd427b21f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost NetworkManager[5967]: [1764669869.2981] manager: (tap13bbad22-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/23) Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.297 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d695257d-6e64-4db8-b8f5-fb063d30639c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost systemd-udevd[310551]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.336 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[8edf0c21-e2b2-4f40-8153-86a87f97e399]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.339 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[e57adc1d-e6cc-45ae-ab66-eec140807700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap13bbad22-a0: link becomes ready Dec 2 05:04:29 localhost NetworkManager[5967]: [1764669869.3678] device (tap13bbad22-a0): carrier: link connected Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.373 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[9ceb333a-854d-4c23-891e-3fcd31b05796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.392 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[2905c7ba-8aea-4c8b-8834-2d057d976c72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13bbad22-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0f:43:17'], ['IFLA_BROADCAST', 
'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1203353, 'reachable_time': 32975, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 
1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310634, 'error': None, 'target': 'ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.409 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[de33862f-e9c2-43ab-8e0b-0fcba5709ed4]: (4, ({'family': 10, 'prefixlen': 64, 
'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:4317'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1203353, 'tstamp': 1203353}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310635, 'error': None, 'target': 'ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.425 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[80bff650-6974-4189-b03e-45806d0f70b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13bbad22-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0f:43:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 
'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1203353, 'reachable_time': 32975, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 
'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310636, 'error': None, 'target': 'ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.455 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[34e75e57-c6d8-4681-8dd7-a853b7ccfa3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.517 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1eac4cfe-b478-48c1-b2ea-49b03dcb6c99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.519 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 
command(idx=0): DelPortCommand(_result=None, port=tap13bbad22-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.521 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.522 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13bbad22-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:29 localhost nova_compute[281045]: 2025-12-02 10:04:29.525 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:29 localhost kernel: device tap13bbad22-a0 entered promiscuous mode Dec 2 05:04:29 localhost nova_compute[281045]: 2025-12-02 10:04:29.529 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.532 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13bbad22-a0, col_values=(('external_ids', {'iface-id': '202be55f-4a2f-4e8a-884e-d4a72a4d525d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:29 localhost ovn_controller[153778]: 2025-12-02T10:04:29Z|00085|binding|INFO|Releasing lport 202be55f-4a2f-4e8a-884e-d4a72a4d525d from this chassis (sb_readonly=0) Dec 2 05:04:29 localhost nova_compute[281045]: 2025-12-02 10:04:29.534 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on 
fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:29 localhost nova_compute[281045]: 2025-12-02 10:04:29.549 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.551 159483 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/13bbad22-ab61-4b1f-849e-c651aa8f3297.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/13bbad22-ab61-4b1f-849e-c651aa8f3297.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.553 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[abb0cb8c-5fee-4fa6-a004-88ae43ec6095]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.553 159483 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: global Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: log /dev/log local0 debug Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: log-tag haproxy-metadata-proxy-13bbad22-ab61-4b1f-849e-c651aa8f3297 Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: user root Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: group root Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: maxconn 1024 Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: pidfile /var/lib/neutron/external/pids/13bbad22-ab61-4b1f-849e-c651aa8f3297.pid.haproxy Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: daemon Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: defaults Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: log global Dec 2 05:04:29 localhost 
ovn_metadata_agent[159477]: mode http Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: option httplog Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: option dontlognull Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: option http-server-close Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: option forwardfor Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: retries 3 Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: timeout http-request 30s Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: timeout connect 30s Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: timeout client 32s Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: timeout server 32s Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: timeout http-keep-alive 30s Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: listen listener Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: bind 169.254.169.254:80 Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: server metadata /var/lib/neutron/metadata_proxy Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: http-request add-header X-OVN-Network-ID 13bbad22-ab61-4b1f-849e-c651aa8f3297 Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 2 05:04:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:29.555 159483 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'env', 'PROCESS_TAG=haproxy-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/13bbad22-ab61-4b1f-849e-c651aa8f3297.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 2 05:04:29 localhost neutron_sriov_agent[255428]: 2025-12-02 
10:04:29.591 2 INFO neutron.agent.securitygroups_rpc [req-1c594721-186d-4097-a94c-c620e0979c63 req-4b6914f0-ee8c-4772-ac7a-a3075974ee64 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['41f7c9c8-7668-4604-9cee-64c2ce6fa2c0']#033[00m Dec 2 05:04:29 localhost podman[310668]: Dec 2 05:04:30 localhost podman[310668]: 2025-12-02 10:04:30.009910798 +0000 UTC m=+0.101981978 container create ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:04:30 localhost systemd[1]: Started libpod-conmon-ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe.scope. Dec 2 05:04:30 localhost podman[310668]: 2025-12-02 10:04:29.961156868 +0000 UTC m=+0.053228088 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 05:04:30 localhost systemd[1]: Started libcrun container. 
Dec 2 05:04:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2da036c8c12fc8adba381824ea5735259dd4f5bbe0a18fc50f819cf9bed9c07/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:04:30 localhost podman[310668]: 2025-12-02 10:04:30.085960796 +0000 UTC m=+0.178031946 container init ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 2 05:04:30 localhost podman[310668]: 2025-12-02 10:04:30.095478609 +0000 UTC m=+0.187549769 container start ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:04:30 localhost neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[310682]: [NOTICE] (310686) : New worker (310688) forked Dec 2 05:04:30 localhost neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[310682]: [NOTICE] (310686) : Loading success. 
Dec 2 05:04:30 localhost dnsmasq[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/addn_hosts - 0 addresses Dec 2 05:04:30 localhost dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/host Dec 2 05:04:30 localhost podman[310712]: 2025-12-02 10:04:30.310695527 +0000 UTC m=+0.060519342 container kill eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:04:30 localhost dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/opts Dec 2 05:04:30 localhost nova_compute[281045]: 2025-12-02 10:04:30.493 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:30 localhost kernel: device tap7466a138-c4 left promiscuous mode Dec 2 05:04:30 localhost ovn_controller[153778]: 2025-12-02T10:04:30Z|00086|binding|INFO|Releasing lport 7466a138-c45f-458b-a865-8c5d3b978b39 from this chassis (sb_readonly=0) Dec 2 05:04:30 localhost ovn_controller[153778]: 2025-12-02T10:04:30Z|00087|binding|INFO|Setting lport 7466a138-c45f-458b-a865-8c5d3b978b39 down in Southbound Dec 2 05:04:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:30.502 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], 
virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-e59f1a37-9713-45f0-9ce4-adafcc25b854', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e59f1a37-9713-45f0-9ce4-adafcc25b854', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1db4f455ea047e3b37458f6d2c5e699', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=507aa10c-3500-464e-ac80-7fecb3c41257, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7466a138-c45f-458b-a865-8c5d3b978b39) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:30.504 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 7466a138-c45f-458b-a865-8c5d3b978b39 in datapath e59f1a37-9713-45f0-9ce4-adafcc25b854 unbound from our chassis#033[00m Dec 2 05:04:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:30.508 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e59f1a37-9713-45f0-9ce4-adafcc25b854, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:04:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:30.509 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[66b4f4ab-6a27-4b7d-9562-61b38d9e5c26]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:30 localhost nova_compute[281045]: 2025-12-02 10:04:30.521 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:30 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 2.1 MiB/s wr, 76 op/s Dec 2 05:04:31 localhost nova_compute[281045]: 2025-12-02 10:04:31.529 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:31 localhost nova_compute[281045]: 2025-12-02 10:04:31.533 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:32 localhost neutron_sriov_agent[255428]: 2025-12-02 10:04:32.190 2 INFO neutron.agent.securitygroups_rpc [req-10b28dbb-d460-47e0-a99a-7ab94b16b5dd req-5be6a150-24be-4b75-af16-d1e63344c43d 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['20cbc49d-f7c3-4e2e-87e6-586884a8dc4b']#033[00m Dec 2 05:04:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:32 localhost podman[310752]: 2025-12-02 10:04:32.785043569 +0000 UTC m=+0.061665577 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:04:32 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses Dec 2 05:04:32 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:04:32 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:04:32 localhost ovn_controller[153778]: 2025-12-02T10:04:32Z|00088|binding|INFO|Releasing lport 202be55f-4a2f-4e8a-884e-d4a72a4d525d from this chassis (sb_readonly=0) Dec 2 05:04:32 localhost ovn_controller[153778]: 2025-12-02T10:04:32Z|00089|binding|INFO|Releasing lport 60398627-924e-4353-b9ee-b86c24b6fc87 from this chassis (sb_readonly=0) Dec 2 05:04:32 localhost nova_compute[281045]: 2025-12-02 10:04:32.898 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:32 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s Dec 2 05:04:33 localhost podman[310792]: 2025-12-02 10:04:33.315831473 +0000 UTC m=+0.067049973 container kill eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:04:33 localhost dnsmasq[309453]: exiting on receipt of SIGTERM Dec 2 05:04:33 
localhost systemd[1]: libpod-eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461.scope: Deactivated successfully. Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.329 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:33 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:33.329 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:33 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:33.332 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:04:33 localhost podman[310804]: 2025-12-02 10:04:33.403326393 +0000 UTC m=+0.067053983 container died eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:04:33 localhost systemd[1]: 
tmp-crun.SxcIUA.mount: Deactivated successfully. Dec 2 05:04:33 localhost podman[310804]: 2025-12-02 10:04:33.443279822 +0000 UTC m=+0.107007352 container cleanup eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:04:33 localhost systemd[1]: libpod-conmon-eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461.scope: Deactivated successfully. Dec 2 05:04:33 localhost podman[310806]: 2025-12-02 10:04:33.488906535 +0000 UTC m=+0.143499454 container remove eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:04:33 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:33.516 262347 INFO neutron.agent.dhcp.agent [None req-917017cd-852b-4ae7-8660-9d6b1c3da2b8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.526 281049 DEBUG nova.compute.manager [req-896f48c4-9b7f-4309-899f-671a9f2dc67b req-02d0c9cf-a674-4bc7-9006-095024c54e05 dafd7fe1ebe54740b64cc9f8b3667fc9 
497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.527 281049 DEBUG oslo_concurrency.lockutils [req-896f48c4-9b7f-4309-899f-671a9f2dc67b req-02d0c9cf-a674-4bc7-9006-095024c54e05 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.528 281049 DEBUG oslo_concurrency.lockutils [req-896f48c4-9b7f-4309-899f-671a9f2dc67b req-02d0c9cf-a674-4bc7-9006-095024c54e05 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.528 281049 DEBUG oslo_concurrency.lockutils [req-896f48c4-9b7f-4309-899f-671a9f2dc67b req-02d0c9cf-a674-4bc7-9006-095024c54e05 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.529 281049 DEBUG nova.compute.manager [req-896f48c4-9b7f-4309-899f-671a9f2dc67b req-02d0c9cf-a674-4bc7-9006-095024c54e05 
dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Processing event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.530 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.535 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.536 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] VM Resumed (Lifecycle Event)#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.540 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.547 281049 INFO nova.virt.libvirt.driver [-] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Instance spawned successfully.#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.548 281049 DEBUG nova.virt.libvirt.driver [None 
req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.556 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.567 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.579 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.580 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Found default 
for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.580 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.581 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.582 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.583 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.592 281049 INFO 
nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Dec 2 05:04:33 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:33.608 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:04:33 localhost podman[239757]: time="2025-12-02T10:04:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.638 281049 INFO nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Took 13.98 seconds to spawn the instance on the hypervisor.#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.640 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:33 localhost podman[239757]: @ - - [02/Dec/2025:10:04:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 160942 "" "Go-http-client/1.1" Dec 2 05:04:33 localhost podman[239757]: @ - - [02/Dec/2025:10:04:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20630 "" "Go-http-client/1.1" Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.725 281049 INFO nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] 
Took 14.93 seconds to build instance.#033[00m Dec 2 05:04:33 localhost nova_compute[281045]: 2025-12-02 10:04:33.743 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 15.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:33 localhost systemd[1]: var-lib-containers-storage-overlay-b0d093117dd79caf19187febcd7ccef397c254025e14d4a134627538b5ac62e5-merged.mount: Deactivated successfully. Dec 2 05:04:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461-userdata-shm.mount: Deactivated successfully. Dec 2 05:04:33 localhost systemd[1]: run-netns-qdhcp\x2de59f1a37\x2d9713\x2d45f0\x2d9ce4\x2dadafcc25b854.mount: Deactivated successfully. Dec 2 05:04:33 localhost systemd[1]: tmp-crun.8QvC4R.mount: Deactivated successfully. 
Dec 2 05:04:33 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 5 addresses Dec 2 05:04:33 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:04:33 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:04:33 localhost podman[310850]: 2025-12-02 10:04:33.830278943 +0000 UTC m=+0.074549793 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:04:34 localhost ovn_controller[153778]: 2025-12-02T10:04:34Z|00090|binding|INFO|Releasing lport 202be55f-4a2f-4e8a-884e-d4a72a4d525d from this chassis (sb_readonly=0) Dec 2 05:04:34 localhost ovn_controller[153778]: 2025-12-02T10:04:34Z|00091|binding|INFO|Releasing lport 60398627-924e-4353-b9ee-b86c24b6fc87 from this chassis (sb_readonly=0) Dec 2 05:04:34 localhost nova_compute[281045]: 2025-12-02 10:04:34.099 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:34 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s Dec 2 05:04:35 localhost nova_compute[281045]: 2025-12-02 10:04:35.684 281049 DEBUG nova.compute.manager [req-a694283e-793a-4e94-a08e-ffe052892600 req-f0840fdc-c66f-42c3-a31d-5f212cebf12d 
dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:04:35 localhost nova_compute[281045]: 2025-12-02 10:04:35.685 281049 DEBUG oslo_concurrency.lockutils [req-a694283e-793a-4e94-a08e-ffe052892600 req-f0840fdc-c66f-42c3-a31d-5f212cebf12d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:35 localhost nova_compute[281045]: 2025-12-02 10:04:35.685 281049 DEBUG oslo_concurrency.lockutils [req-a694283e-793a-4e94-a08e-ffe052892600 req-f0840fdc-c66f-42c3-a31d-5f212cebf12d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:35 localhost nova_compute[281045]: 2025-12-02 10:04:35.686 281049 DEBUG oslo_concurrency.lockutils [req-a694283e-793a-4e94-a08e-ffe052892600 req-f0840fdc-c66f-42c3-a31d-5f212cebf12d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:35 localhost nova_compute[281045]: 2025-12-02 10:04:35.686 281049 DEBUG nova.compute.manager [req-a694283e-793a-4e94-a08e-ffe052892600 
req-f0840fdc-c66f-42c3-a31d-5f212cebf12d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] No waiting events found dispatching network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:04:35 localhost nova_compute[281045]: 2025-12-02 10:04:35.686 281049 WARNING nova.compute.manager [req-a694283e-793a-4e94-a08e-ffe052892600 req-f0840fdc-c66f-42c3-a31d-5f212cebf12d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received unexpected event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f for instance with vm_state active and task_state None.#033[00m Dec 2 05:04:35 localhost nova_compute[281045]: 2025-12-02 10:04:35.903 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:36 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:36.335 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:36 localhost neutron_sriov_agent[255428]: 2025-12-02 10:04:36.406 2 INFO neutron.agent.securitygroups_rpc [req-7ec4157a-3973-4fe9-90a5-6b7e95187ed9 req-9adda286-3e5c-4f67-99d9-e6d6658a3dd8 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['ec37aab1-8e3e-42dd-a42d-6454010a3bb1']#033[00m Dec 2 05:04:36 localhost nova_compute[281045]: 2025-12-02 10:04:36.534 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:36 localhost neutron_sriov_agent[255428]: 2025-12-02 10:04:36.790 2 INFO neutron.agent.securitygroups_rpc [req-41dd90c2-f92d-4e4d-a9a2-5512726d06ed req-eda4abe0-dc4d-48d0-a211-5598e3a12357 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['ec37aab1-8e3e-42dd-a42d-6454010a3bb1']#033[00m Dec 2 05:04:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:04:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:04:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:04:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:04:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:04:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:04:36 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s Dec 2 05:04:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:37 localhost nova_compute[281045]: 2025-12-02 10:04:37.494 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Check if temp file /var/lib/nova/instances/tmpvcgqfy3k exists to indicate shared storage is being used for migration. Exists? 
False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m Dec 2 05:04:37 localhost nova_compute[281045]: 2025-12-02 10:04:37.494 281049 DEBUG nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] source check data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpvcgqfy3k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='82e23ec3-1d57-4166-9ba0-839ded943a78',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m Dec 2 05:04:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:04:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:04:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:04:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:04:38 localhost systemd[1]: tmp-crun.TDB7oZ.mount: Deactivated successfully. 
Dec 2 05:04:38 localhost podman[310871]: 2025-12-02 10:04:38.108318733 +0000 UTC m=+0.104962718 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 05:04:38 localhost podman[310872]: 2025-12-02 10:04:38.117718913 +0000 UTC m=+0.110200591 container health_status 
c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:04:38 localhost podman[310871]: 2025-12-02 10:04:38.123938064 +0000 UTC m=+0.120582069 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 
'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:04:38 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:04:38 localhost podman[310870]: 2025-12-02 10:04:38.1855884 +0000 UTC m=+0.185360441 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:04:38 localhost podman[310872]: 2025-12-02 10:04:38.190901843 +0000 UTC m=+0.183383511 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:04:38 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 05:04:38 localhost podman[310870]: 2025-12-02 10:04:38.200686044 +0000 UTC m=+0.200458085 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:04:38 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 05:04:38 localhost podman[310869]: 2025-12-02 10:04:38.244031577 +0000 UTC m=+0.243350265 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 05:04:38 localhost podman[310869]: 2025-12-02 10:04:38.252940271 +0000 UTC 
m=+0.252258949 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:04:38 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 05:04:38 localhost neutron_sriov_agent[255428]: 2025-12-02 10:04:38.318 2 INFO neutron.agent.securitygroups_rpc [req-8e540b7a-da71-4acd-ab56-fd3bce480c0a req-799fda44-ad0c-42d1-806d-41b3bc34424c 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['ec37aab1-8e3e-42dd-a42d-6454010a3bb1']#033[00m Dec 2 05:04:38 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s Dec 2 05:04:40 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:40.004 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:39Z, description=, device_id=279e244d-14ba-4911-a425-d38d92768269, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2a5f6f54-623d-4412-84a8-0e113f2d185f, ip_allocation=immediate, mac_address=fa:16:3e:a8:b0:59, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, 
project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=692, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:04:39Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:04:40 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses Dec 2 05:04:40 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:04:40 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:04:40 localhost podman[310966]: 2025-12-02 10:04:40.216088413 +0000 UTC m=+0.055119416 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:04:40 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:40.515 262347 INFO neutron.agent.dhcp.agent [None req-da977d56-85d3-4320-851d-66b5e47d3862 - - - - - -] DHCP configuration for ports {'2a5f6f54-623d-4412-84a8-0e113f2d185f'} is completed#033[00m Dec 2 05:04:40 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:40.706 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:40Z, description=, device_id=11e16c5e-46e1-4a00-8cde-eb7c634beb6e, device_owner=network:router_gateway, 
dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1f543bfe-6c57-4a47-ae94-6dbd02322d8e, ip_allocation=immediate, mac_address=fa:16:3e:50:3c:9e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=699, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:04:40Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:04:40 localhost podman[311005]: 2025-12-02 10:04:40.931808273 +0000 UTC m=+0.060000976 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:04:40 localhost dnsmasq[262677]: read 
/var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 7 addresses Dec 2 05:04:40 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:04:40 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:04:40 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v157: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 63 op/s Dec 2 05:04:41 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:41.292 262347 INFO neutron.agent.dhcp.agent [None req-70668fb7-61a1-4972-b9d9-6eb4a52faef0 - - - - - -] DHCP configuration for ports {'1f543bfe-6c57-4a47-ae94-6dbd02322d8e'} is completed#033[00m Dec 2 05:04:41 localhost nova_compute[281045]: 2025-12-02 10:04:41.536 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:04:41 localhost nova_compute[281045]: 2025-12-02 10:04:41.537 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:41 localhost nova_compute[281045]: 2025-12-02 10:04:41.537 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:04:41 localhost nova_compute[281045]: 2025-12-02 10:04:41.538 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:04:41 localhost nova_compute[281045]: 2025-12-02 10:04:41.538 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:04:41 localhost nova_compute[281045]: 2025-12-02 10:04:41.541 281049 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:41 localhost nova_compute[281045]: 2025-12-02 10:04:41.762 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:42 localhost openstack_network_exporter[241816]: ERROR 10:04:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:04:42 localhost openstack_network_exporter[241816]: ERROR 10:04:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:04:42 localhost openstack_network_exporter[241816]: ERROR 10:04:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:04:42 localhost openstack_network_exporter[241816]: Dec 2 05:04:42 localhost openstack_network_exporter[241816]: ERROR 10:04:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:04:42 localhost openstack_network_exporter[241816]: Dec 2 05:04:42 localhost openstack_network_exporter[241816]: ERROR 10:04:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:04:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:42 localhost nova_compute[281045]: 2025-12-02 10:04:42.733 281049 DEBUG nova.compute.manager [req-e94686d1-8b87-43bc-bd34-00a93eed8d94 req-53063fa0-11e1-411e-aa93-01731abbaf9d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-unplugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:04:42 
localhost nova_compute[281045]: 2025-12-02 10:04:42.733 281049 DEBUG oslo_concurrency.lockutils [req-e94686d1-8b87-43bc-bd34-00a93eed8d94 req-53063fa0-11e1-411e-aa93-01731abbaf9d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:42 localhost nova_compute[281045]: 2025-12-02 10:04:42.734 281049 DEBUG oslo_concurrency.lockutils [req-e94686d1-8b87-43bc-bd34-00a93eed8d94 req-53063fa0-11e1-411e-aa93-01731abbaf9d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:42 localhost nova_compute[281045]: 2025-12-02 10:04:42.734 281049 DEBUG oslo_concurrency.lockutils [req-e94686d1-8b87-43bc-bd34-00a93eed8d94 req-53063fa0-11e1-411e-aa93-01731abbaf9d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:42 localhost nova_compute[281045]: 2025-12-02 10:04:42.735 281049 DEBUG nova.compute.manager [req-e94686d1-8b87-43bc-bd34-00a93eed8d94 req-53063fa0-11e1-411e-aa93-01731abbaf9d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] No waiting events found dispatching network-vif-unplugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f pop_instance_event 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:04:42 localhost nova_compute[281045]: 2025-12-02 10:04:42.735 281049 DEBUG nova.compute.manager [req-e94686d1-8b87-43bc-bd34-00a93eed8d94 req-53063fa0-11e1-411e-aa93-01731abbaf9d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-unplugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Dec 2 05:04:42 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v158: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 63 op/s Dec 2 05:04:43 localhost nova_compute[281045]: 2025-12-02 10:04:43.482 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:44 localhost nova_compute[281045]: 2025-12-02 10:04:44.910 281049 DEBUG nova.compute.manager [req-53a04abe-c550-4709-beac-434f3fe55ddf req-a319bd0c-c9d4-4135-8840-1a64d72841ea dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:04:44 localhost nova_compute[281045]: 2025-12-02 10:04:44.911 281049 DEBUG oslo_concurrency.lockutils [req-53a04abe-c550-4709-beac-434f3fe55ddf req-a319bd0c-c9d4-4135-8840-1a64d72841ea dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 
05:04:44 localhost nova_compute[281045]: 2025-12-02 10:04:44.911 281049 DEBUG oslo_concurrency.lockutils [req-53a04abe-c550-4709-beac-434f3fe55ddf req-a319bd0c-c9d4-4135-8840-1a64d72841ea dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:44 localhost nova_compute[281045]: 2025-12-02 10:04:44.912 281049 DEBUG oslo_concurrency.lockutils [req-53a04abe-c550-4709-beac-434f3fe55ddf req-a319bd0c-c9d4-4135-8840-1a64d72841ea dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:44 localhost nova_compute[281045]: 2025-12-02 10:04:44.912 281049 DEBUG nova.compute.manager [req-53a04abe-c550-4709-beac-434f3fe55ddf req-a319bd0c-c9d4-4135-8840-1a64d72841ea dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] No waiting events found dispatching network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:04:44 localhost nova_compute[281045]: 2025-12-02 10:04:44.913 281049 WARNING nova.compute.manager [req-53a04abe-c550-4709-beac-434f3fe55ddf req-a319bd0c-c9d4-4135-8840-1a64d72841ea dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received unexpected event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f for instance with vm_state active and task_state migrating.#033[00m 
Dec 2 05:04:44 localhost nova_compute[281045]: 2025-12-02 10:04:44.932 281049 INFO nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Took 6.60 seconds for pre_live_migration on destination host np0005541913.localdomain.#033[00m Dec 2 05:04:44 localhost nova_compute[281045]: 2025-12-02 10:04:44.932 281049 DEBUG nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Dec 2 05:04:44 localhost nova_compute[281045]: 2025-12-02 10:04:44.971 281049 DEBUG nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpvcgqfy3k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='82e23ec3-1d57-4166-9ba0-839ded943a78',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(f83e1b81-4647-4642-b7c4-b4f369bef051),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m Dec 2 05:04:44 localhost nova_compute[281045]: 2025-12-02 10:04:44.975 281049 DEBUG 
nova.objects.instance [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 82e23ec3-1d57-4166-9ba0-839ded943a78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:04:44 localhost nova_compute[281045]: 2025-12-02 10:04:44.976 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m Dec 2 05:04:44 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v159: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 63 op/s Dec 2 05:04:44 localhost nova_compute[281045]: 2025-12-02 10:04:44.979 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m Dec 2 05:04:44 localhost nova_compute[281045]: 2025-12-02 10:04:44.979 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m Dec 2 05:04:45 localhost nova_compute[281045]: 2025-12-02 10:04:45.065 281049 DEBUG nova.virt.libvirt.vif [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - 
- default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T10:04:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-39688497',display_name='tempest-LiveMigrationTest-server-39688497',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-livemigrationtest-server-39688497',id=8,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-02T10:04:33Z,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d048f19ff5fc47dc88162ef5f9cebe8b',ramdisk_id='',reservation_id='r-lnn0by93',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1345186206',owner_user_name='tempest-LiveMigrationTest-1345186206-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-12-02T10:04:33Z,user_dat
a=None,user_id='ec20a6cceee246d6b46878df263d30a4',uuid=82e23ec3-1d57-4166-9ba0-839ded943a78,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Dec 2 05:04:45 localhost nova_compute[281045]: 2025-12-02 10:04:45.066 281049 DEBUG nova.network.os_vif_util [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Converting VIF {"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:04:45 localhost nova_compute[281045]: 2025-12-02 10:04:45.067 281049 DEBUG nova.network.os_vif_util [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:04:45 localhost nova_compute[281045]: 2025-12-02 10:04:45.068 281049 DEBUG nova.virt.libvirt.migration [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Updating guest XML with vif config: Dec 2 05:04:45 localhost nova_compute[281045]: Dec 2 05:04:45 localhost nova_compute[281045]: Dec 2 05:04:45 localhost nova_compute[281045]: Dec 2 05:04:45 localhost nova_compute[281045]: Dec 2 05:04:45 localhost nova_compute[281045]: Dec 2 05:04:45 localhost nova_compute[281045]: 
Dec 2 05:04:45 localhost nova_compute[281045]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m Dec 2 05:04:45 localhost nova_compute[281045]: 2025-12-02 10:04:45.069 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m Dec 2 05:04:45 localhost nova_compute[281045]: 2025-12-02 10:04:45.482 281049 DEBUG nova.virt.libvirt.migration [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Dec 2 05:04:45 localhost nova_compute[281045]: 2025-12-02 10:04:45.483 281049 INFO nova.virt.libvirt.migration [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m Dec 2 05:04:45 localhost nova_compute[281045]: 2025-12-02 10:04:45.721 281049 INFO nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 10:04:46.226 
281049 DEBUG nova.virt.libvirt.migration [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 10:04:46.227 281049 DEBUG nova.virt.libvirt.migration [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 10:04:46.540 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 10:04:46.730 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 10:04:46.731 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] VM Paused (Lifecycle Event)#033[00m Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 10:04:46.751 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 
10:04:46.757 281049 DEBUG nova.virt.libvirt.migration [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 10:04:46.758 281049 DEBUG nova.virt.libvirt.migration [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 10:04:46.758 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 10:04:46.778 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] During sync_power_state the instance has a pending task (migrating). 
Skip.#033[00m Dec 2 05:04:46 localhost kernel: device tap54433c73-7e left promiscuous mode Dec 2 05:04:46 localhost NetworkManager[5967]: [1764669886.9750] device (tap54433c73-7e): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Dec 2 05:04:46 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v160: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 63 op/s Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 10:04:46.983 281049 DEBUG nova.compute.manager [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-changed-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:04:46 localhost ovn_controller[153778]: 2025-12-02T10:04:46Z|00092|binding|INFO|Releasing lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f from this chassis (sb_readonly=0) Dec 2 05:04:46 localhost ovn_controller[153778]: 2025-12-02T10:04:46Z|00093|binding|INFO|Setting lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f down in Southbound Dec 2 05:04:46 localhost ovn_controller[153778]: 2025-12-02T10:04:46Z|00094|binding|INFO|Releasing lport ffcaba02-6808-4409-8458-941ca0af2e66 from this chassis (sb_readonly=0) Dec 2 05:04:46 localhost ovn_controller[153778]: 2025-12-02T10:04:46Z|00095|binding|INFO|Setting lport ffcaba02-6808-4409-8458-941ca0af2e66 down in Southbound Dec 2 05:04:46 localhost ovn_controller[153778]: 2025-12-02T10:04:46Z|00096|binding|INFO|Removing iface tap54433c73-7e ovn-installed in OVS Dec 2 05:04:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. 
Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 10:04:46.984 281049 DEBUG nova.compute.manager [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Refreshing instance network info cache due to event network-changed-54433c73-7e5c-481c-b64c-19e9cfd6e56f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 10:04:46.984 281049 DEBUG oslo_concurrency.lockutils [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 10:04:46.984 281049 DEBUG oslo_concurrency.lockutils [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquired lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 10:04:46.985 281049 DEBUG nova.network.neutron [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Refreshing network info cache for port 54433c73-7e5c-481c-b64c-19e9cfd6e56f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Dec 2 05:04:46 localhost nova_compute[281045]: 2025-12-02 10:04:46.986 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:04:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:46.992 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:75:fd 19.80.0.43'], port_security=['fa:16:3e:a7:75:fd 19.80.0.43'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['54433c73-7e5c-481c-b64c-19e9cfd6e56f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1664568330', 'neutron:cidrs': '19.80.0.43/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1664568330', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=e1e893da-07af-44e3-945f-c862571583e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=ffcaba02-6808-4409-8458-941ca0af2e66) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:46 localhost ovn_controller[153778]: 2025-12-02T10:04:46Z|00097|binding|INFO|Releasing lport 202be55f-4a2f-4e8a-884e-d4a72a4d525d from this chassis (sb_readonly=0) Dec 2 05:04:46 localhost ovn_controller[153778]: 
2025-12-02T10:04:46Z|00098|binding|INFO|Releasing lport 60398627-924e-4353-b9ee-b86c24b6fc87 from this chassis (sb_readonly=0) Dec 2 05:04:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:46.994 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:b6:1c 10.100.0.13'], port_security=['fa:16:3e:bb:b6:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain,np0005541913.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-146896978', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '82e23ec3-1d57-4166-9ba0-839ded943a78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-146896978', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '8', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e42abf-8647-4013-9c62-778191c64ad0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=54433c73-7e5c-481c-b64c-19e9cfd6e56f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:46.995 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 
ffcaba02-6808-4409-8458-941ca0af2e66 in datapath c40d86e4-7101-443b-abce-328f7d1ea40e unbound from our chassis#033[00m Dec 2 05:04:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:46.998 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port d8d3f12d-b617-495e-ba6c-02c2da59133c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:04:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:46.998 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c40d86e4-7101-443b-abce-328f7d1ea40e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.004 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[035bc1f9-32f3-4900-a3bc-42982e454188]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.006 159483 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e namespace which is not needed anymore#033[00m Dec 2 05:04:47 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000008.scope: Deactivated successfully. Dec 2 05:04:47 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000008.scope: Consumed 12.799s CPU time. Dec 2 05:04:47 localhost nova_compute[281045]: 2025-12-02 10:04:47.037 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:47 localhost systemd-machined[202765]: Machine qemu-4-instance-00000008 terminated. 
Dec 2 05:04:47 localhost podman[311033]: 2025-12-02 10:04:47.077322013 +0000 UTC m=+0.074521283 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:04:47 localhost nova_compute[281045]: 2025-12-02 10:04:47.077 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:47 localhost podman[311033]: 2025-12-02 10:04:47.085332739 +0000 UTC m=+0.082532009 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:04:47 localhost journal[228953]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/82e23ec3-1d57-4166-9ba0-839ded943a78_disk: No such file or directory Dec 2 05:04:47 localhost journal[228953]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/82e23ec3-1d57-4166-9ba0-839ded943a78_disk: No such file or directory Dec 2 05:04:47 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:04:47 localhost kernel: device tap54433c73-7e entered promiscuous mode Dec 2 05:04:47 localhost NetworkManager[5967]: [1764669887.1065] manager: (tap54433c73-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/24) Dec 2 05:04:47 localhost kernel: device tap54433c73-7e left promiscuous mode Dec 2 05:04:47 localhost nova_compute[281045]: 2025-12-02 10:04:47.112 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:47 localhost ovn_controller[153778]: 2025-12-02T10:04:47Z|00099|binding|INFO|Claiming lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f for this chassis. Dec 2 05:04:47 localhost ovn_controller[153778]: 2025-12-02T10:04:47Z|00100|binding|INFO|54433c73-7e5c-481c-b64c-19e9cfd6e56f: Claiming fa:16:3e:bb:b6:1c 10.100.0.13 Dec 2 05:04:47 localhost ovn_controller[153778]: 2025-12-02T10:04:47Z|00101|binding|INFO|Claiming lport ffcaba02-6808-4409-8458-941ca0af2e66 for this chassis. 
Dec 2 05:04:47 localhost ovn_controller[153778]: 2025-12-02T10:04:47Z|00102|binding|INFO|ffcaba02-6808-4409-8458-941ca0af2e66: Claiming fa:16:3e:a7:75:fd 19.80.0.43 Dec 2 05:04:47 localhost ovn_controller[153778]: 2025-12-02T10:04:47Z|00103|binding|INFO|Releasing lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f from this chassis (sb_readonly=0) Dec 2 05:04:47 localhost ovn_controller[153778]: 2025-12-02T10:04:47Z|00104|binding|INFO|Releasing lport ffcaba02-6808-4409-8458-941ca0af2e66 from this chassis (sb_readonly=0) Dec 2 05:04:47 localhost nova_compute[281045]: 2025-12-02 10:04:47.131 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.134 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:75:fd 19.80.0.43'], port_security=['fa:16:3e:a7:75:fd 19.80.0.43'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['54433c73-7e5c-481c-b64c-19e9cfd6e56f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1664568330', 'neutron:cidrs': '19.80.0.43/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1664568330', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], 
datapath=e1e893da-07af-44e3-945f-c862571583e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=ffcaba02-6808-4409-8458-941ca0af2e66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.136 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:b6:1c 10.100.0.13'], port_security=['fa:16:3e:bb:b6:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain,np0005541913.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-146896978', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '82e23ec3-1d57-4166-9ba0-839ded943a78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-146896978', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '8', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e42abf-8647-4013-9c62-778191c64ad0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=54433c73-7e5c-481c-b64c-19e9cfd6e56f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 
2 05:04:47 localhost nova_compute[281045]: 2025-12-02 10:04:47.138 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m Dec 2 05:04:47 localhost nova_compute[281045]: 2025-12-02 10:04:47.139 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m Dec 2 05:04:47 localhost nova_compute[281045]: 2025-12-02 10:04:47.139 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.148 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:75:fd 19.80.0.43'], port_security=['fa:16:3e:a7:75:fd 19.80.0.43'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['54433c73-7e5c-481c-b64c-19e9cfd6e56f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1664568330', 'neutron:cidrs': '19.80.0.43/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 
'neutron:mtu': '', 'neutron:network_name': 'neutron-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1664568330', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=e1e893da-07af-44e3-945f-c862571583e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=ffcaba02-6808-4409-8458-941ca0af2e66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.150 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:b6:1c 10.100.0.13'], port_security=['fa:16:3e:bb:b6:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain,np0005541913.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-146896978', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '82e23ec3-1d57-4166-9ba0-839ded943a78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-146896978', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '8', 'neutron:security_group_ids': 
'576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e42abf-8647-4013-9c62-778191c64ad0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=54433c73-7e5c-481c-b64c-19e9cfd6e56f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:47 localhost podman[311035]: 2025-12-02 10:04:47.144652073 +0000 UTC m=+0.141257454 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9) Dec 2 05:04:47 localhost podman[311035]: 2025-12-02 10:04:47.234184037 +0000 UTC m=+0.230789438 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-type=git, summary=Provides the latest release of the minimal 
Red Hat Universal Base Image 9., config_id=edpm, architecture=x86_64, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 05:04:47 localhost neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[310609]: [NOTICE] (310613) : haproxy version is 2.8.14-c23fe91 Dec 2 05:04:47 localhost neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[310609]: [NOTICE] (310613) : path to executable is /usr/sbin/haproxy Dec 2 05:04:47 localhost neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[310609]: [WARNING] (310613) : Exiting Master process... Dec 2 05:04:47 localhost neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[310609]: [ALERT] (310613) : Current worker (310615) exited with code 143 (Terminated) Dec 2 05:04:47 localhost neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[310609]: [WARNING] (310613) : All workers exited. Exiting... (0) Dec 2 05:04:47 localhost systemd[1]: libpod-2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68.scope: Deactivated successfully. Dec 2 05:04:47 localhost podman[311103]: 2025-12-02 10:04:47.245299999 +0000 UTC m=+0.066058803 container died 2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:04:47 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:04:47 localhost nova_compute[281045]: 2025-12-02 10:04:47.260 281049 DEBUG nova.virt.libvirt.guest [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '82e23ec3-1d57-4166-9ba0-839ded943a78' (instance-00000008) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m Dec 2 05:04:47 localhost nova_compute[281045]: 2025-12-02 10:04:47.261 281049 INFO nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Migration operation has completed#033[00m Dec 2 05:04:47 localhost nova_compute[281045]: 2025-12-02 10:04:47.262 281049 INFO nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] _post_live_migration() is started..#033[00m Dec 2 05:04:47 localhost podman[311103]: 2025-12-02 10:04:47.351257907 +0000 UTC m=+0.172016661 container cleanup 2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:04:47 localhost podman[311116]: 2025-12-02 10:04:47.365512686 +0000 UTC m=+0.119092404 container cleanup 2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:04:47 localhost systemd[1]: libpod-conmon-2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68.scope: Deactivated successfully. Dec 2 05:04:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:47 localhost podman[311131]: 2025-12-02 10:04:47.445204256 +0000 UTC m=+0.075317267 container remove 2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.449 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[cafdccee-a8f9-4f0b-919b-e6f1fc2ccb58]: (4, ('Tue Dec 2 10:04:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e (2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68)\n2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68\nTue Dec 2 10:04:47 AM UTC 2025 Deleting container 
neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e (2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68)\n2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.451 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[886530bc-1d2d-4ab5-86f9-1c252820e804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.452 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc40d86e4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:47 localhost nova_compute[281045]: 2025-12-02 10:04:47.455 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:47 localhost kernel: device tapc40d86e4-70 left promiscuous mode Dec 2 05:04:47 localhost nova_compute[281045]: 2025-12-02 10:04:47.469 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.472 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[7aca7714-d0fd-4123-810e-17f07fca4d98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.487 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[69c1a516-07ec-442e-b4a5-cceb63ccb022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.491 262550 DEBUG oslo.privsep.daemon 
[-] privsep: reply[7cd1ff6c-1608-444d-b806-840c0bf270f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.510 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[ed936f86-4c46-4e3d-a0c9-f7e38f2a322c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 
'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1203256, 'reachable_time': 17453, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 
'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311152, 'error': None, 'target': 'ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.513 159602 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.513 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[e322d757-f502-48ac-a172-0cd7e4f54163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.514 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 54433c73-7e5c-481c-b64c-19e9cfd6e56f in datapath 13bbad22-ab61-4b1f-849e-c651aa8f3297 unbound from our chassis#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.517 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 13bbad22-ab61-4b1f-849e-c651aa8f3297, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.518 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[463cadb3-1d07-415a-933c-33f1fa9bbe92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.519 159483 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297 namespace which is not needed anymore#033[00m
Dec 2 05:04:47 localhost neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[310682]: [NOTICE] (310686) : haproxy version is 2.8.14-c23fe91
Dec 2 05:04:47 localhost neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[310682]: [NOTICE] (310686) : path to executable is /usr/sbin/haproxy
Dec 2 05:04:47 localhost neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[310682]: [WARNING] (310686) : Exiting Master process...
Dec 2 05:04:47 localhost neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[310682]: [ALERT] (310686) : Current worker (310688) exited with code 143 (Terminated)
Dec 2 05:04:47 localhost neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[310682]: [WARNING] (310686) : All workers exited. Exiting... (0)
Dec 2 05:04:47 localhost systemd[1]: libpod-ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe.scope: Deactivated successfully.
Dec 2 05:04:47 localhost podman[311168]: 2025-12-02 10:04:47.713229189 +0000 UTC m=+0.078419963 container died ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 2 05:04:47 localhost podman[311168]: 2025-12-02 10:04:47.76300811 +0000 UTC m=+0.128198844 container cleanup ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:04:47 localhost podman[311180]: 2025-12-02 10:04:47.795691804 +0000 UTC m=+0.076112601 container cleanup ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:04:47 localhost systemd[1]: libpod-conmon-ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe.scope: Deactivated successfully.
Dec 2 05:04:47 localhost podman[311198]: 2025-12-02 10:04:47.87326911 +0000 UTC m=+0.085556061 container remove ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.880 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[f8afc288-f700-41d1-8fa1-c89db23447d1]: (4, ('Tue Dec 2 10:04:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297 (ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe)\nace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe\nTue Dec 2 10:04:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297 (ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe)\nace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.882 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[98c0f259-ab3e-4a6e-ae06-cfee06393ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.883 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13bbad22-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 2 05:04:47 localhost nova_compute[281045]: 2025-12-02 10:04:47.911 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:04:47 localhost kernel: device tap13bbad22-a0 left promiscuous mode
Dec 2 05:04:47 localhost nova_compute[281045]: 2025-12-02 10:04:47.923 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.926 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[fa71a966-9fbe-4266-bd57-680b0d8e629b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.938 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[82a113e4-18fa-46ed-a9f5-c6428fdfb65c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.940 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd245de-c87c-41d2-86ec-8910f114a11f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.959 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e0c83e-7314-4537-a8de-13b8adc496b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1203345, 'reachable_time': 25885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311221, 'error': None, 'target': 'ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.962 159602 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.962 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[de576496-61f5-49a9-a292-566e1881259d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.963 159483 INFO neutron.agent.ovn.metadata.agent [-] Port ffcaba02-6808-4409-8458-941ca0af2e66 in datapath c40d86e4-7101-443b-abce-328f7d1ea40e unbound from our chassis#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.967 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port d8d3f12d-b617-495e-ba6c-02c2da59133c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.967 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c40d86e4-7101-443b-abce-328f7d1ea40e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.968 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ad162d-b4f0-4d6a-9fad-0ec8fd5f19a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.969 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 54433c73-7e5c-481c-b64c-19e9cfd6e56f in datapath 13bbad22-ab61-4b1f-849e-c651aa8f3297 unbound from our chassis#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.973 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 13bbad22-ab61-4b1f-849e-c651aa8f3297, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.973 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[86eef710-bf89-4046-98c9-ced84a70c56c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.974 159483 INFO neutron.agent.ovn.metadata.agent [-] Port ffcaba02-6808-4409-8458-941ca0af2e66 in datapath c40d86e4-7101-443b-abce-328f7d1ea40e unbound from our chassis#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.977 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port d8d3f12d-b617-495e-ba6c-02c2da59133c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.978 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c40d86e4-7101-443b-abce-328f7d1ea40e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.978 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[0015f13c-d332-4592-b6ce-f1695a9e4cba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.979 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 54433c73-7e5c-481c-b64c-19e9cfd6e56f in datapath 13bbad22-ab61-4b1f-849e-c651aa8f3297 unbound from our chassis#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.982 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 13bbad22-ab61-4b1f-849e-c651aa8f3297, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:04:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:04:47.983 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[f86195f7-f153-4ff6-81d4-68a969f48314]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:04:48 localhost systemd[1]: tmp-crun.LFKO8B.mount: Deactivated successfully.
Dec 2 05:04:48 localhost systemd[1]: var-lib-containers-storage-overlay-a2da036c8c12fc8adba381824ea5735259dd4f5bbe0a18fc50f819cf9bed9c07-merged.mount: Deactivated successfully.
Dec 2 05:04:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe-userdata-shm.mount: Deactivated successfully.
Dec 2 05:04:48 localhost systemd[1]: run-netns-ovnmeta\x2d13bbad22\x2dab61\x2d4b1f\x2d849e\x2dc651aa8f3297.mount: Deactivated successfully.
Dec 2 05:04:48 localhost systemd[1]: var-lib-containers-storage-overlay-ba06f5d13e82d475898b43f8dfdffe45494dd6b8060a149bf598427f2a15c274-merged.mount: Deactivated successfully.
Dec 2 05:04:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68-userdata-shm.mount: Deactivated successfully.
Dec 2 05:04:48 localhost systemd[1]: run-netns-ovnmeta\x2dc40d86e4\x2d7101\x2d443b\x2dabce\x2d328f7d1ea40e.mount: Deactivated successfully.
Dec 2 05:04:48 localhost snmpd[69217]: empty variable list in _query
Dec 2 05:04:48 localhost snmpd[69217]: empty variable list in _query
Dec 2 05:04:48 localhost snmpd[69217]: empty variable list in _query
Dec 2 05:04:48 localhost snmpd[69217]: empty variable list in _query
Dec 2 05:04:48 localhost snmpd[69217]: empty variable list in _query
Dec 2 05:04:48 localhost snmpd[69217]: empty variable list in _query
Dec 2 05:04:48 localhost nova_compute[281045]: 2025-12-02 10:04:48.722 281049 DEBUG nova.network.neutron [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Updated VIF entry in instance network info cache for port 54433c73-7e5c-481c-b64c-19e9cfd6e56f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 2 05:04:48 localhost nova_compute[281045]: 2025-12-02 10:04:48.724 281049 DEBUG nova.network.neutron [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Updating instance_info_cache with network_info: [{"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "np0005541913.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 2 05:04:48 localhost nova_compute[281045]: 2025-12-02 10:04:48.744 281049 DEBUG oslo_concurrency.lockutils [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Releasing lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 2 05:04:48 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:48.812 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:47Z, description=, device_id=3c297297-876e-43ee-83e5-1e1ff7b8f51c, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=eb25b32f-4168-45b7-be29-c5d1e26399ec, ip_allocation=immediate, mac_address=fa:16:3e:e1:cd:7c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=723, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:04:48Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m
Dec 2 05:04:48 localhost nova_compute[281045]: 2025-12-02 10:04:48.827 281049 DEBUG nova.compute.manager [req-c99bae35-794e-46b5-92c1-e75ef9279250 req-1c2da56a-54ca-4f2b-a3c4-a1d5c5ec8942 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-unplugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 2 05:04:48 localhost nova_compute[281045]: 2025-12-02 10:04:48.827 281049 DEBUG oslo_concurrency.lockutils [req-c99bae35-794e-46b5-92c1-e75ef9279250 req-1c2da56a-54ca-4f2b-a3c4-a1d5c5ec8942 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:04:48 localhost nova_compute[281045]: 2025-12-02 10:04:48.828 281049 DEBUG oslo_concurrency.lockutils [req-c99bae35-794e-46b5-92c1-e75ef9279250 req-1c2da56a-54ca-4f2b-a3c4-a1d5c5ec8942 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:04:48 localhost nova_compute[281045]: 2025-12-02 10:04:48.828 281049 DEBUG oslo_concurrency.lockutils [req-c99bae35-794e-46b5-92c1-e75ef9279250 req-1c2da56a-54ca-4f2b-a3c4-a1d5c5ec8942 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:04:48 localhost nova_compute[281045]: 2025-12-02 10:04:48.829 281049 DEBUG nova.compute.manager [req-c99bae35-794e-46b5-92c1-e75ef9279250 req-1c2da56a-54ca-4f2b-a3c4-a1d5c5ec8942 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] No waiting events found dispatching network-vif-unplugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 2 05:04:48 localhost nova_compute[281045]: 2025-12-02 10:04:48.829 281049 DEBUG nova.compute.manager [req-c99bae35-794e-46b5-92c1-e75ef9279250 req-1c2da56a-54ca-4f2b-a3c4-a1d5c5ec8942 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-unplugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 2 05:04:48 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v161: 177 pgs: 177 active+clean; 218 MiB data, 874 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Dec 2 05:04:49 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 8 addresses
Dec 2 05:04:49 localhost podman[311238]: 2025-12-02 10:04:49.035386148 +0000 UTC m=+0.065227737 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:04:49 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 2 05:04:49 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 2 05:04:49 localhost systemd[1]: tmp-crun.PkU9QX.mount: Deactivated successfully.
Dec 2 05:04:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:04:49.275 262347 INFO neutron.agent.dhcp.agent [None req-1e32bc6b-15d3-4dfe-b4fd-25ae583ac136 - - - - - -] DHCP configuration for ports {'eb25b32f-4168-45b7-be29-c5d1e26399ec'} is completed#033[00m
Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.256 281049 DEBUG nova.network.neutron [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Activated binding for port 54433c73-7e5c-481c-b64c-19e9cfd6e56f and host np0005541913.localdomain migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.257 281049 DEBUG nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.258 281049 DEBUG nova.virt.libvirt.vif [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T10:04:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-39688497',display_name='tempest-LiveMigrationTest-server-39688497',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-livemigrationtest-server-39688497',id=8,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-02T10:04:33Z,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d048f19ff5fc47dc88162ef5f9cebe8b',ramdisk_id='',reservation_id='r-lnn0by93',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1345186206',owner_user_name='tempest-LiveMigrationTest-1345186206-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-12-02T10:04:36Z,user_data=None,user_id='ec20a6cceee246d6b46878df263d30a4',uuid=82e23ec3-1d57-4166-9ba0-839ded943a78,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.258 281049 DEBUG nova.network.os_vif_util [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Converting VIF {"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.259 281049 DEBUG nova.network.os_vif_util [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.260 281049 DEBUG os_vif [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.261 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.262 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54433c73-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.299 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.303 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.306 281049 INFO os_vif [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e')#033[00m
Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.307 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.307 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.308 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.308
281049 DEBUG nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.309 281049 INFO nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Deleting instance files /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78_del#033[00m Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.309 281049 INFO nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Deletion of /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78_del complete#033[00m Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.728 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.979 281049 DEBUG nova.compute.manager [req-8e388069-c41a-4a06-8d77-cc6d194efd73 req-c896b4b7-2f45-4358-94cc-7af150d45236 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.980 281049 DEBUG oslo_concurrency.lockutils [req-8e388069-c41a-4a06-8d77-cc6d194efd73 
req-c896b4b7-2f45-4358-94cc-7af150d45236 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:50 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v162: 177 pgs: 177 active+clean; 218 MiB data, 874 MiB used, 41 GiB / 42 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 59 op/s Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.980 281049 DEBUG oslo_concurrency.lockutils [req-8e388069-c41a-4a06-8d77-cc6d194efd73 req-c896b4b7-2f45-4358-94cc-7af150d45236 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.981 281049 DEBUG oslo_concurrency.lockutils [req-8e388069-c41a-4a06-8d77-cc6d194efd73 req-c896b4b7-2f45-4358-94cc-7af150d45236 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.983 281049 DEBUG nova.compute.manager [req-8e388069-c41a-4a06-8d77-cc6d194efd73 req-c896b4b7-2f45-4358-94cc-7af150d45236 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] No waiting events found dispatching network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f 
pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:04:50 localhost nova_compute[281045]: 2025-12-02 10:04:50.983 281049 WARNING nova.compute.manager [req-8e388069-c41a-4a06-8d77-cc6d194efd73 req-c896b4b7-2f45-4358-94cc-7af150d45236 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received unexpected event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f for instance with vm_state active and task_state migrating.#033[00m Dec 2 05:04:51 localhost systemd[1]: tmp-crun.BCxZd7.mount: Deactivated successfully. Dec 2 05:04:51 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 7 addresses Dec 2 05:04:51 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:04:51 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:04:51 localhost podman[311273]: 2025-12-02 10:04:51.434940031 +0000 UTC m=+0.076125793 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:04:51 localhost nova_compute[281045]: 2025-12-02 10:04:51.543 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:51 localhost nova_compute[281045]: 2025-12-02 10:04:51.942 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:52 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 177 active+clean; 225 MiB data, 877 MiB used, 41 GiB / 42 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 68 op/s Dec 2 05:04:53 localhost nova_compute[281045]: 2025-12-02 10:04:53.031 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:53 localhost podman[311313]: 2025-12-02 10:04:53.183945357 +0000 UTC m=+0.064177524 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 05:04:53 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses Dec 2 05:04:53 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:04:53 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:04:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 05:04:53 localhost podman[311328]: 2025-12-02 10:04:53.31054385 +0000 UTC m=+0.095593690 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:04:53 localhost podman[311328]: 2025-12-02 10:04:53.32904745 +0000 UTC m=+0.114097300 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 05:04:53 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:04:53 localhost nova_compute[281045]: 2025-12-02 10:04:53.368 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:53 localhost nova_compute[281045]: 2025-12-02 10:04:53.519 281049 DEBUG nova.compute.manager [req-7d5afed1-9e05-4cbd-baf7-22087d56638a req-04e77fdc-ea5e-4235-9716-48b5f9f068d9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:04:53 localhost nova_compute[281045]: 2025-12-02 10:04:53.520 281049 DEBUG oslo_concurrency.lockutils [req-7d5afed1-9e05-4cbd-baf7-22087d56638a req-04e77fdc-ea5e-4235-9716-48b5f9f068d9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:53 localhost nova_compute[281045]: 2025-12-02 10:04:53.520 281049 DEBUG oslo_concurrency.lockutils [req-7d5afed1-9e05-4cbd-baf7-22087d56638a req-04e77fdc-ea5e-4235-9716-48b5f9f068d9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:53 localhost nova_compute[281045]: 2025-12-02 10:04:53.520 281049 DEBUG oslo_concurrency.lockutils [req-7d5afed1-9e05-4cbd-baf7-22087d56638a req-04e77fdc-ea5e-4235-9716-48b5f9f068d9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock 
"82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:53 localhost nova_compute[281045]: 2025-12-02 10:04:53.520 281049 DEBUG nova.compute.manager [req-7d5afed1-9e05-4cbd-baf7-22087d56638a req-04e77fdc-ea5e-4235-9716-48b5f9f068d9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] No waiting events found dispatching network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:04:53 localhost nova_compute[281045]: 2025-12-02 10:04:53.521 281049 WARNING nova.compute.manager [req-7d5afed1-9e05-4cbd-baf7-22087d56638a req-04e77fdc-ea5e-4235-9716-48b5f9f068d9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received unexpected event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f for instance with vm_state active and task_state migrating.#033[00m Dec 2 05:04:53 localhost nova_compute[281045]: 2025-12-02 10:04:53.579 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:54 localhost nova_compute[281045]: 2025-12-02 10:04:54.228 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:54 localhost nova_compute[281045]: 2025-12-02 10:04:54.229 281049 DEBUG oslo_concurrency.lockutils 
[None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:54 localhost nova_compute[281045]: 2025-12-02 10:04:54.229 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:54 localhost nova_compute[281045]: 2025-12-02 10:04:54.260 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:54 localhost nova_compute[281045]: 2025-12-02 10:04:54.260 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:54 localhost nova_compute[281045]: 2025-12-02 10:04:54.261 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:54 localhost nova_compute[281045]: 2025-12-02 10:04:54.261 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:04:54 localhost nova_compute[281045]: 2025-12-02 10:04:54.262 281049 DEBUG oslo_concurrency.processutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:54 localhost neutron_sriov_agent[255428]: 2025-12-02 10:04:54.636 2 INFO neutron.agent.securitygroups_rpc [None req-5252ab83-90b7-4c17-ab41-150a0f430946 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Security group rule updated ['2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b']#033[00m Dec 2 05:04:54 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:04:54 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3831071445' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:04:54 localhost nova_compute[281045]: 2025-12-02 10:04:54.698 281049 DEBUG oslo_concurrency.processutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:54 localhost nova_compute[281045]: 2025-12-02 10:04:54.908 281049 WARNING nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:04:54 localhost nova_compute[281045]: 2025-12-02 10:04:54.910 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11558MB free_disk=41.70097732543945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", 
"address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:04:54 localhost nova_compute[281045]: 2025-12-02 10:04:54.910 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:54 localhost nova_compute[281045]: 2025-12-02 10:04:54.910 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock 
"compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:54 localhost neutron_sriov_agent[255428]: 2025-12-02 10:04:54.924 2 INFO neutron.agent.securitygroups_rpc [None req-a8a8282d-6793-4a84-80fc-24e3966f9a17 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Security group rule updated ['2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b']#033[00m Dec 2 05:04:54 localhost nova_compute[281045]: 2025-12-02 10:04:54.951 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Migration for instance 82e23ec3-1d57-4166-9ba0-839ded943a78 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m Dec 2 05:04:54 localhost nova_compute[281045]: 2025-12-02 10:04:54.974 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Skipping migration as instance is neither resizing nor live-migrating. 
_update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m Dec 2 05:04:54 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 225 MiB data, 877 MiB used, 41 GiB / 42 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 68 op/s Dec 2 05:04:55 localhost nova_compute[281045]: 2025-12-02 10:04:55.025 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Migration f83e1b81-4647-4642-b7c4-b4f369bef051 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m Dec 2 05:04:55 localhost nova_compute[281045]: 2025-12-02 10:04:55.025 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:04:55 localhost nova_compute[281045]: 2025-12-02 10:04:55.026 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:04:55 localhost nova_compute[281045]: 2025-12-02 10:04:55.070 281049 DEBUG oslo_concurrency.processutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default 
default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:55 localhost nova_compute[281045]: 2025-12-02 10:04:55.300 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:55 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:04:55 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4098882359' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:04:55 localhost nova_compute[281045]: 2025-12-02 10:04:55.538 281049 DEBUG oslo_concurrency.processutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:55 localhost nova_compute[281045]: 2025-12-02 10:04:55.546 281049 DEBUG nova.compute.provider_tree [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:04:55 localhost nova_compute[281045]: 2025-12-02 10:04:55.570 281049 DEBUG nova.scheduler.client.report [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 
'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:04:55 localhost nova_compute[281045]: 2025-12-02 10:04:55.598 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:04:55 localhost nova_compute[281045]: 2025-12-02 10:04:55.599 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:55 localhost nova_compute[281045]: 2025-12-02 10:04:55.607 281049 INFO nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Migrating instance to np0005541913.localdomain finished successfully.#033[00m Dec 2 05:04:55 localhost nova_compute[281045]: 2025-12-02 10:04:55.694 281049 INFO nova.scheduler.client.report [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Deleted allocation for migration f83e1b81-4647-4642-b7c4-b4f369bef051#033[00m Dec 2 05:04:55 localhost 
nova_compute[281045]: 2025-12-02 10:04:55.695 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m Dec 2 05:04:56 localhost nova_compute[281045]: 2025-12-02 10:04:56.589 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:56 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 225 MiB data, 877 MiB used, 41 GiB / 42 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 68 op/s Dec 2 05:04:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:57 localhost nova_compute[281045]: 2025-12-02 10:04:57.529 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:58 localhost nova_compute[281045]: 2025-12-02 10:04:58.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:58 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 177 active+clean; 145 MiB data, 738 MiB used, 41 GiB / 42 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 96 op/s Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.158 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 
955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.159 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.192 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Starting instance... 
_do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.270 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.270 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.273 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.274 281049 INFO nova.compute.claims [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Claim successful on node np0005541914.localdomain#033[00m Dec 2 05:04:59 localhost podman[311414]: 2025-12-02 10:04:59.328985612 +0000 UTC m=+0.067355612 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:04:59 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 5 addresses Dec 2 05:04:59 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:04:59 localhost systemd[1]: tmp-crun.yCKiyw.mount: Deactivated successfully. 
Dec 2 05:04:59 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.408 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.483 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:59 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:04:59 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1410163043' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.858 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.864 281049 DEBUG nova.compute.provider_tree [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 
10:04:59.894 281049 DEBUG nova.scheduler.client.report [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.917 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.919 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.967 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Allocating IP information in the background. 
_allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.968 281049 DEBUG nova.network.neutron [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m Dec 2 05:04:59 localhost nova_compute[281045]: 2025-12-02 10:04:59.982 281049 INFO nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.012 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Start building block device mappings for instance. 
_build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.024 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.071 281049 DEBUG nova.policy [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '955214da09cd44dba70e1a06eabc9023', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '50df25ee29424615807a458690cdf8d7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.119 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Start spawning the instance on the hypervisor. 
_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.122 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.122 281049 INFO nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Creating image(s)#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.155 281049 DEBUG nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.182 281049 DEBUG nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.208 281049 DEBUG nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:05:00 
localhost nova_compute[281045]: 2025-12-02 10:05:00.211 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.281 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.282 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.284 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.284 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.319 281049 DEBUG nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.323 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc abf8d33c-4e24-4d26-af41-b01c828c67e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.342 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:00 localhost neutron_sriov_agent[255428]: 2025-12-02 10:05:00.408 2 INFO neutron.agent.securitygroups_rpc [req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 req-4740c003-3af7-4933-8b00-851aa84e7e55 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Security group 
member updated ['2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b']#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.633 281049 DEBUG nova.network.neutron [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Successfully created port: a0a73e76-685f-4ba0-87b5-5dd27b54fab4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.889 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc abf8d33c-4e24-4d26-af41-b01c828c67e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:05:00 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v167: 177 pgs: 177 active+clean; 145 MiB data, 738 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 24 KiB/s wr, 36 op/s Dec 2 05:05:00 localhost nova_compute[281045]: 2025-12-02 10:05:00.991 281049 DEBUG 
nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] resizing rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.152 281049 DEBUG nova.objects.instance [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lazy-loading 'migration_context' on Instance uuid abf8d33c-4e24-4d26-af41-b01c828c67e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.171 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.171 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Ensure instance console log exists: /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.172 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.172 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.173 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.529 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.530 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:05:01 localhost neutron_sriov_agent[255428]: 2025-12-02 10:05:01.532 2 INFO neutron.agent.securitygroups_rpc [None 
req-9eec1e00-2947-423e-88e8-b2e4c78afea0 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Security group member updated ['576d6513-029b-4880-bb0b-58094b586b90']#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.557 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.557 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.636 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.691 281049 DEBUG nova.network.neutron [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Successfully updated port: a0a73e76-685f-4ba0-87b5-5dd27b54fab4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.709 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 
10:05:01.709 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquired lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.709 281049 DEBUG nova.network.neutron [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.774 281049 DEBUG nova.compute.manager [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-changed-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.775 281049 DEBUG nova.compute.manager [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Refreshing instance network info cache due to event network-changed-a0a73e76-685f-4ba0-87b5-5dd27b54fab4. 
external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Dec 2 05:05:01 localhost nova_compute[281045]: 2025-12-02 10:05:01.775 281049 DEBUG oslo_concurrency.lockutils [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:05:01 localhost dnsmasq[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/addn_hosts - 0 addresses Dec 2 05:05:01 localhost dnsmasq-dhcp[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/host Dec 2 05:05:01 localhost dnsmasq-dhcp[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/opts Dec 2 05:05:01 localhost podman[311639]: 2025-12-02 10:05:01.802906822 +0000 UTC m=+0.057980284 container kill 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:05:01 localhost systemd[1]: tmp-crun.waA8Jh.mount: Deactivated successfully. 
Dec 2 05:05:01 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e109 e109: 6 total, 6 up, 6 in
Dec 2 05:05:02 localhost nova_compute[281045]: 2025-12-02 10:05:02.109 281049 DEBUG nova.network.neutron [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 2 05:05:02 localhost nova_compute[281045]: 2025-12-02 10:05:02.125 281049 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 2 05:05:02 localhost nova_compute[281045]: 2025-12-02 10:05:02.125 281049 INFO nova.compute.manager [-] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] VM Stopped (Lifecycle Event)#033[00m
Dec 2 05:05:02 localhost nova_compute[281045]: 2025-12-02 10:05:02.176 281049 DEBUG nova.compute.manager [None req-7c1fcdaa-7955-47f3-abcb-d07c22c61577 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 2 05:05:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:05:02 localhost nova_compute[281045]: 2025-12-02 10:05:02.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:05:02 localhost nova_compute[281045]: 2025-12-02 10:05:02.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:05:02 localhost nova_compute[281045]: 2025-12-02 10:05:02.876 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:05:02 localhost nova_compute[281045]: 2025-12-02 10:05:02.877 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:05:02 localhost nova_compute[281045]: 2025-12-02 10:05:02.877 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:05:02 localhost nova_compute[281045]: 2025-12-02 10:05:02.878 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 2 05:05:02 localhost nova_compute[281045]: 2025-12-02 10:05:02.878 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 05:05:02 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 192 MiB data, 809 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.111 281049 DEBUG nova.network.neutron [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Updating instance_info_cache with network_info: [{"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.128 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Releasing lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.129 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Instance network_info: |[{"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.130 281049 DEBUG oslo_concurrency.lockutils [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquired lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.131 281049 DEBUG nova.network.neutron [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Refreshing network info cache for port a0a73e76-685f-4ba0-87b5-5dd27b54fab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.137 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Start _get_guest_xml network_info=[{"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-12-02T10:01:55Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'image_id': 'd85e840d-fa56-497b-b5bd-b49584d3e97a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.142 281049 WARNING nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.151 281049 DEBUG nova.virt.libvirt.host [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.152 281049 DEBUG nova.virt.libvirt.host [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.154 281049 DEBUG nova.virt.libvirt.host [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.154 281049 DEBUG nova.virt.libvirt.host [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.155 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.156 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T10:01:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='82beb986-6d20-42dc-b738-1cef87dee30f',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-12-02T10:01:55Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.156 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.157 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.157 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.158 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.158 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.158 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.159 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.159 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.160 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.160 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.165 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 05:05:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:03.176 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:05:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:03.177 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:05:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:03.177 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:05:03 localhost systemd[1]: tmp-crun.7uO6rC.mount: Deactivated successfully.
Dec 2 05:05:03 localhost dnsmasq[309707]: exiting on receipt of SIGTERM
Dec 2 05:05:03 localhost podman[311698]: 2025-12-02 10:05:03.247774725 +0000 UTC m=+0.065071162 container kill 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 2 05:05:03 localhost systemd[1]: libpod-8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464.scope: Deactivated successfully.
Dec 2 05:05:03 localhost podman[311711]: 2025-12-02 10:05:03.315772457 +0000 UTC m=+0.053901550 container died 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:05:03 localhost podman[311711]: 2025-12-02 10:05:03.347222603 +0000 UTC m=+0.085351616 container cleanup 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:05:03 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 2 05:05:03 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2077633090' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 2 05:05:03 localhost systemd[1]: libpod-conmon-8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464.scope: Deactivated successfully.
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.366 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 05:05:03 localhost podman[311713]: 2025-12-02 10:05:03.402514514 +0000 UTC m=+0.135313763 container remove 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 2 05:05:03 localhost ovn_controller[153778]: 2025-12-02T10:05:03Z|00105|binding|INFO|Releasing lport de515592-061d-469f-83fb-52a8d86b335c from this chassis (sb_readonly=0)
Dec 2 05:05:03 localhost ovn_controller[153778]: 2025-12-02T10:05:03Z|00106|binding|INFO|Setting lport de515592-061d-469f-83fb-52a8d86b335c down in Southbound
Dec 2 05:05:03 localhost kernel: device tapde515592-06 left promiscuous mode
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.420 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:05:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:03.432 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1e893da-07af-44e3-945f-c862571583e8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=de515592-061d-469f-83fb-52a8d86b335c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:05:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:03.434 159483 INFO neutron.agent.ovn.metadata.agent [-] Port de515592-061d-469f-83fb-52a8d86b335c in datapath c40d86e4-7101-443b-abce-328f7d1ea40e unbound from our chassis#033[00m
Dec 2 05:05:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:03.437 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c40d86e4-7101-443b-abce-328f7d1ea40e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:05:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:03.438 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5d1c8e-7413-4fba-a469-3471e6e923ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.442 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.624 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.626 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11516MB free_disk=41.774757385253906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.626 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.628 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:05:03 localhost podman[239757]: time="2025-12-02T10:05:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 05:05:03 localhost podman[239757]: @ - - [02/Dec/2025:10:05:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 2 05:05:03 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 2 05:05:03 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/402438692' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 2 05:05:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:05:03.686 262347 INFO neutron.agent.dhcp.agent [None req-7315e879-b571-440a-8acf-f1bb6471da4b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.687 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 05:05:03 localhost podman[239757]: @ - - [02/Dec/2025:10:05:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19203 "" "Go-http-client/1.1"
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.716 281049 DEBUG nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.720 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.743 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Instance abf8d33c-4e24-4d26-af41-b01c828c67e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.743 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.743 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 2 05:05:03 localhost nova_compute[281045]: 2025-12-02 10:05:03.787 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 05:05:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 2 05:05:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3823900154' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.174 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.178 281049 DEBUG nova.virt.libvirt.vif [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T10:04:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=10,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP/dbfwF7RFRTDuwB6jwzuSQ/IcUc/koBGae2h16UX9iSnGmmWafAjmR0zhsoi8E87Oi2Cm1JEv8wzMjtBlM1hsGOt9Lg/6ZEqGVxh82xbfu37aVfdDp2kn2MPZvfs8d3A==',key_name='tempest-keypair-1080862001',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='50df25ee29424615807a458690cdf8d7',ramdisk_id='',reservation_id='r-5yub6qye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-2112874438',owner_user_name='tempest-ServersV294TestFqdnHostnames-2112874438-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T10:05:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='955214da09cd44dba70e1a06eabc9023',uuid=abf8d33c-4e24-4d26-af41-b01c828c67e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type":
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.179 281049 DEBUG nova.network.os_vif_util [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Converting VIF {"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": 
null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.180 281049 DEBUG nova.network.os_vif_util [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:9d:c1,bridge_name='br-int',has_traffic_filtering=True,id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4,network=Network(45d02cf1-f511-4416-b7c1-b37c417f16f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a73e76-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:05:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:05:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/4000286707' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.182 281049 DEBUG nova.objects.instance [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid abf8d33c-4e24-4d26-af41-b01c828c67e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.199 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.205 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:05:04 localhost systemd[1]: var-lib-containers-storage-overlay-ffda5ed93ce183a48e515ae2d9c5dd554b9888bdc7086e24a8673d10ea36adfd-merged.mount: Deactivated successfully. Dec 2 05:05:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464-userdata-shm.mount: Deactivated successfully. Dec 2 05:05:04 localhost systemd[1]: run-netns-qdhcp\x2dc40d86e4\x2d7101\x2d443b\x2dabce\x2d328f7d1ea40e.mount: Deactivated successfully. 
Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.282 281049 DEBUG nova.network.neutron [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Updated VIF entry in instance network info cache for port a0a73e76-685f-4ba0-87b5-5dd27b54fab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.283 281049 DEBUG nova.network.neutron [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Updating instance_info_cache with network_info: [{"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] 
update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:05:04 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:05:04.544 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.633 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] End _get_guest_xml xml=
[libvirt domain XML elided: the angle-bracket markup was stripped during log capture, leaving only bare text nodes across the following nova_compute lines. Recoverable values: uuid abf8d33c-4e24-4d26-af41-b01c828c67e0, name instance-0000000a, memory 131072, vcpus 1, nova metadata (name guest-instance-1, creationTime 2025-12-02 10:05:03, flavor 128 MB RAM / 1 vCPU / 0 ephemeral / 0 swap, owner tempest-ServersV294TestFqdnHostnames-2112874438-project-member / project tempest-ServersV294TestFqdnHostnames-2112874438), sysinfo RDO / OpenStack Compute / version 27.5.2-0.20250829104910.6f8decf.el9 / serial and uuid abf8d33c-4e24-4d26-af41-b01c828c67e0 / family Virtual Machine, os type hvm, rng backend /dev/urandom]
Dec 2 05:05:04 localhost nova_compute[281045]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.635 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Preparing to wait for external event network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.635 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" by
"nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.636 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.637 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.638 281049 DEBUG nova.virt.libvirt.vif [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T10:04:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=10,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP/dbfwF7RFRTDuwB6jwzuSQ/IcUc/koBGae2h16UX9iSnGmmWafAjmR0zhsoi8E87Oi2Cm1JEv8wzMjtBlM1hsGOt9Lg/6ZEqGVxh82xbfu37aVfdDp2kn2MPZvfs8d3A==',key_name='tempest-keypair-1080862001',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='50df25ee29424615807a458690cdf8d7',ramdisk_id='',reservation_id='r-5yub6qye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-2112874438',owner_user_name='tempest-ServersV294TestFqdnHostnames-2112874438-project-member'},tags=TagList,task_state='spawning',termina
ted_at=None,trusted_certs=None,updated_at=2025-12-02T10:05:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='955214da09cd44dba70e1a06eabc9023',uuid=abf8d33c-4e24-4d26-af41-b01c828c67e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.638 281049 DEBUG nova.network.os_vif_util [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Converting VIF {"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": 
"tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.639 281049 DEBUG nova.network.os_vif_util [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:9d:c1,bridge_name='br-int',has_traffic_filtering=True,id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4,network=Network(45d02cf1-f511-4416-b7c1-b37c417f16f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a73e76-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.640 281049 DEBUG os_vif [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Plugging vif 
VIFOpenVSwitch(active=False,address=fa:16:3e:16:9d:c1,bridge_name='br-int',has_traffic_filtering=True,id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4,network=Network(45d02cf1-f511-4416-b7c1-b37c417f16f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a73e76-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.642 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.642 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.643 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.646 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.653 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.653 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0a73e76-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.654 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0a73e76-68, col_values=(('external_ids', {'iface-id': 'a0a73e76-685f-4ba0-87b5-5dd27b54fab4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:9d:c1', 'vm-uuid': 'abf8d33c-4e24-4d26-af41-b01c828c67e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.656 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.662 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.664 281049 INFO os_vif [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:9d:c1,bridge_name='br-int',has_traffic_filtering=True,id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4,network=Network(45d02cf1-f511-4416-b7c1-b37c417f16f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a73e76-68')#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.699 
281049 DEBUG oslo_concurrency.lockutils [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Releasing lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.706 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.707 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.770 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.771 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] No BDM found with device name sda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.772 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] No VIF found with MAC fa:16:3e:16:9d:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.773 281049 INFO nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Using config drive#033[00m Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.815 281049 DEBUG nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:05:04 localhost systemd[1]: tmp-crun.YuNJHz.mount: Deactivated successfully. 
Dec 2 05:05:04 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses Dec 2 05:05:04 localhost podman[311860]: 2025-12-02 10:05:04.909850798 +0000 UTC m=+0.059593914 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:05:04 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:05:04 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:05:04 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:05:04.931 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:04 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v170: 177 pgs: 177 active+clean; 192 MiB data, 809 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 2.1 MiB/s wr, 81 op/s Dec 2 05:05:04 localhost nova_compute[281045]: 2025-12-02 10:05:04.997 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.002 281049 INFO nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Creating config drive at 
/var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0/disk.config#033[00m Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.007 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6b54pamg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.140 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6b54pamg" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.183 281049 DEBUG nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.188 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): rbd import --pool vms 
/var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0/disk.config abf8d33c-4e24-4d26-af41-b01c828c67e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.440 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0/disk.config abf8d33c-4e24-4d26-af41-b01c828c67e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.441 281049 INFO nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Deleting local config drive /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0/disk.config because it was imported into RBD.#033[00m Dec 2 05:05:05 localhost kernel: device tapa0a73e76-68 entered promiscuous mode Dec 2 05:05:05 localhost NetworkManager[5967]: [1764669905.4893] manager: (tapa0a73e76-68): new Tun device (/org/freedesktop/NetworkManager/Devices/25) Dec 2 05:05:05 localhost systemd-udevd[311930]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:05:05 localhost ovn_controller[153778]: 2025-12-02T10:05:05Z|00107|binding|INFO|Claiming lport a0a73e76-685f-4ba0-87b5-5dd27b54fab4 for this chassis. 
Dec 2 05:05:05 localhost ovn_controller[153778]: 2025-12-02T10:05:05Z|00108|binding|INFO|a0a73e76-685f-4ba0-87b5-5dd27b54fab4: Claiming fa:16:3e:16:9d:c1 10.100.0.12 Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.494 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.501 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:9d:c1 10.100.0.12'], port_security=['fa:16:3e:16:9d:c1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'abf8d33c-4e24-4d26-af41-b01c828c67e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50df25ee29424615807a458690cdf8d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b257864-5151-448f-941d-2c9a748f5881, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=a0a73e76-685f-4ba0-87b5-5dd27b54fab4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.503 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 
a0a73e76-685f-4ba0-87b5-5dd27b54fab4 in datapath 45d02cf1-f511-4416-b7c1-b37c417f16f9 bound to our chassis#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.506 159483 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45d02cf1-f511-4416-b7c1-b37c417f16f9#033[00m Dec 2 05:05:05 localhost NetworkManager[5967]: [1764669905.5065] device (tapa0a73e76-68): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 2 05:05:05 localhost NetworkManager[5967]: [1764669905.5071] device (tapa0a73e76-68): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 2 05:05:05 localhost ovn_controller[153778]: 2025-12-02T10:05:05Z|00109|binding|INFO|Setting lport a0a73e76-685f-4ba0-87b5-5dd27b54fab4 ovn-installed in OVS Dec 2 05:05:05 localhost ovn_controller[153778]: 2025-12-02T10:05:05Z|00110|binding|INFO|Setting lport a0a73e76-685f-4ba0-87b5-5dd27b54fab4 up in Southbound Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.516 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.516 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed4ca5d-c74d-468b-8794-1f1751d54102]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.518 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45d02cf1-f1 in ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.520 262550 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45d02cf1-f0 not found in namespace None get_link_id 
/usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.520 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4e72ebfb-60dc-4c50-b481-6b8e6b9adf6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.523 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[0dcabd1c-4570-45a0-b129-e73040a0711e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost systemd-machined[202765]: New machine qemu-5-instance-0000000a. Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.536 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[8d67b885-79fc-4fef-99ea-e23e27858457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost systemd[1]: Started Virtual Machine qemu-5-instance-0000000a. 
Dec 2 05:05:05 localhost neutron_sriov_agent[255428]: 2025-12-02 10:05:05.552 2 INFO neutron.agent.securitygroups_rpc [None req-7954669c-1491-4ccc-a463-0efe07ba8bc3 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Security group member updated ['576d6513-029b-4880-bb0b-58094b586b90']#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.549 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc4ae7e-1ed4-40e6-ac20-8b406bca8500]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.580 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[e13f6324-3374-48e8-9392-bbe6fb955fcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost systemd-udevd[311933]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:05:05 localhost NetworkManager[5967]: [1764669905.5871] manager: (tap45d02cf1-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/26) Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.585 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d5af2d69-87d9-461b-86f6-9e624ceea1ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.615 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1e896b-9e02-44a6-8f44-13ab8e75677d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.619 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9d23f5-dcb6-498f-b712-97a93ddcadb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost NetworkManager[5967]: [1764669905.6419] device (tap45d02cf1-f0): carrier: link connected Dec 2 05:05:05 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap45d02cf1-f1: link becomes ready Dec 2 05:05:05 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap45d02cf1-f0: link becomes ready Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.646 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf20856-263a-4d1c-a68f-e9e47605d56d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.667 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e7732a74-70d7-47ae-abcd-266c69e50aac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45d02cf1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], 
['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:f8:d7:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 
'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1206980, 'reachable_time': 33855, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], 
['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311966, 'error': None, 'target': 'ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.685 281049 DEBUG nova.compute.manager [req-ff8e1c12-af22-407f-8951-a83dee130508 req-84862912-e993-46e7-a72e-ca9ac8a5379d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.686 281049 DEBUG oslo_concurrency.lockutils [req-ff8e1c12-af22-407f-8951-a83dee130508 req-84862912-e993-46e7-a72e-ca9ac8a5379d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.685 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7da269-d5eb-44c9-85a4-b2b4936e3b00]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:d7c5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1206980, 'tstamp': 1206980}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311982, 'error': None, 'target': 'ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.687 281049 DEBUG oslo_concurrency.lockutils [req-ff8e1c12-af22-407f-8951-a83dee130508 req-84862912-e993-46e7-a72e-ca9ac8a5379d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.687 281049 DEBUG oslo_concurrency.lockutils [req-ff8e1c12-af22-407f-8951-a83dee130508 req-84862912-e993-46e7-a72e-ca9ac8a5379d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.688 281049 DEBUG nova.compute.manager [req-ff8e1c12-af22-407f-8951-a83dee130508 req-84862912-e993-46e7-a72e-ca9ac8a5379d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Processing event network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.706 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[63aef844-019b-4bf5-99ab-c653d8dcdeb5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45d02cf1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], 
['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:f8:d7:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 
'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1206980, 'reachable_time': 33855, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 
'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311984, 'error': None, 'target': 'ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.748 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[979deb73-93b2-42e0-b1a8-743067cc5b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.820 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[8fad0057-406f-4046-9f8f-09a92f9afeec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.821 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45d02cf1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.822 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.823 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45d02cf1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.825 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:05 localhost kernel: device tap45d02cf1-f0 entered promiscuous mode Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.828 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.834 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45d02cf1-f0, col_values=(('external_ids', {'iface-id': '0999b431-c362-4180-a7a9-8664fe007369'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.836 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:05 localhost ovn_controller[153778]: 2025-12-02T10:05:05Z|00111|binding|INFO|Releasing lport 0999b431-c362-4180-a7a9-8664fe007369 from this chassis (sb_readonly=0) Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.837 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.840 159483 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45d02cf1-f511-4416-b7c1-b37c417f16f9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45d02cf1-f511-4416-b7c1-b37c417f16f9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.841 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[53ff3763-2af8-4cad-8948-7895a47a9a5f]: (4, None) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.842 159483 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: global Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: log /dev/log local0 debug Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: log-tag haproxy-metadata-proxy-45d02cf1-f511-4416-b7c1-b37c417f16f9 Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: user root Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: group root Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: maxconn 1024 Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: pidfile /var/lib/neutron/external/pids/45d02cf1-f511-4416-b7c1-b37c417f16f9.pid.haproxy Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: daemon Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: defaults Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: log global Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: mode http Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: option httplog Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: option dontlognull Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: option http-server-close Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: option forwardfor Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: retries 3 Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: timeout http-request 30s Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: timeout connect 30s Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: timeout client 32s Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: timeout server 32s Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: timeout http-keep-alive 30s Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: listen 
listener Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: bind 169.254.169.254:80 Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: server metadata /var/lib/neutron/metadata_proxy Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: http-request add-header X-OVN-Network-ID 45d02cf1-f511-4416-b7c1-b37c417f16f9 Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:05.843 159483 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'env', 'PROCESS_TAG=haproxy-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45d02cf1-f511-4416-b7c1-b37c417f16f9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 2 05:05:05 localhost nova_compute[281045]: 2025-12-02 10:05:05.848 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:06 localhost podman[312036]: Dec 2 05:05:06 localhost podman[312036]: 2025-12-02 10:05:06.242119388 +0000 UTC m=+0.077802253 container create bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:05:06 localhost systemd[1]: Started 
libpod-conmon-bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241.scope. Dec 2 05:05:06 localhost systemd[1]: tmp-crun.F0bET5.mount: Deactivated successfully. Dec 2 05:05:06 localhost systemd[1]: Started libcrun container. Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.278 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.280 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] VM Started (Lifecycle Event)#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.282 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Dec 2 05:05:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5520c5903efa84481bfea75a392635964a1c0dc18350544c148b6b54d47a4621/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.288 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.291 281049 INFO nova.virt.libvirt.driver [-] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] 
Instance spawned successfully.#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.292 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Dec 2 05:05:06 localhost podman[312036]: 2025-12-02 10:05:06.292823858 +0000 UTC m=+0.128506723 container init bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 2 05:05:06 localhost podman[312036]: 2025-12-02 10:05:06.198121676 +0000 UTC m=+0.033804601 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.301 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:05:06 localhost podman[312036]: 2025-12-02 10:05:06.30331425 +0000 UTC m=+0.138997115 container start bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.306 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.320 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:05:06 localhost neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9[312057]: [NOTICE] (312061) : New worker (312063) forked Dec 2 05:05:06 localhost neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9[312057]: [NOTICE] (312061) : Loading success. 
Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.323 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.324 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.325 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.326 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.327 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 
- - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.333 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.333 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.334 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] VM Paused (Lifecycle Event)#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.362 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.365 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.366 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] VM Resumed (Lifecycle Event)#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.387 281049 DEBUG nova.compute.manager [None 
req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.391 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.401 281049 INFO nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Took 6.28 seconds to spawn the instance on the hypervisor.#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.402 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.414 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.474 281049 INFO nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Took 7.23 seconds to build instance.#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.492 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 7.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.662 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.708 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.709 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:05:06 localhost nova_compute[281045]: 2025-12-02 10:05:06.709 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:05:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:05:06 Dec 2 05:05:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 05:05:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap Dec 2 05:05:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['backups', 'manila_data', 'vms', '.mgr', 'images', 'volumes', 'manila_metadata'] Dec 2 05:05:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes Dec 2 05:05:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:05:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:05:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:05:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:05:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:05:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:05:06 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v171: 177 pgs: 177 active+clean; 192 MiB data, 809 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 2.1 MiB/s wr, 81 op/s Dec 2 05:05:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:05:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:05:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:05:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:05:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:05:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:05:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:05:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 2 05:05:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:05:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004807566478224456 of space, bias 1.0, pg target 0.9615132956448912 quantized to 32 (current 32) Dec 2 05:05:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:05:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:05:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:05:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 
'images' root_id -1 using 0.00430047372278057 of space, bias 1.0, pg target 0.8572277620742602 quantized to 32 (current 32) Dec 2 05:05:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:05:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:05:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:05:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:05:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:05:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001949853433835846 quantized to 16 (current 16) Dec 2 05:05:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:05:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:05:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:05:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:05:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:05:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e110 e110: 6 total, 6 up, 6 in Dec 2 05:05:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:07 localhost nova_compute[281045]: 2025-12-02 10:05:07.727 281049 DEBUG nova.compute.manager [req-ecb3fb56-acce-4414-b123-f3f1a288e861 
req-161feddb-d36b-4dda-84ba-0ff7d1e62664 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:05:07 localhost nova_compute[281045]: 2025-12-02 10:05:07.728 281049 DEBUG oslo_concurrency.lockutils [req-ecb3fb56-acce-4414-b123-f3f1a288e861 req-161feddb-d36b-4dda-84ba-0ff7d1e62664 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:05:07 localhost nova_compute[281045]: 2025-12-02 10:05:07.729 281049 DEBUG oslo_concurrency.lockutils [req-ecb3fb56-acce-4414-b123-f3f1a288e861 req-161feddb-d36b-4dda-84ba-0ff7d1e62664 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:05:07 localhost nova_compute[281045]: 2025-12-02 10:05:07.729 281049 DEBUG oslo_concurrency.lockutils [req-ecb3fb56-acce-4414-b123-f3f1a288e861 req-161feddb-d36b-4dda-84ba-0ff7d1e62664 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:07 localhost nova_compute[281045]: 2025-12-02 10:05:07.730 281049 DEBUG nova.compute.manager 
[req-ecb3fb56-acce-4414-b123-f3f1a288e861 req-161feddb-d36b-4dda-84ba-0ff7d1e62664 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] No waiting events found dispatching network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:05:07 localhost nova_compute[281045]: 2025-12-02 10:05:07.730 281049 WARNING nova.compute.manager [req-ecb3fb56-acce-4414-b123-f3f1a288e861 req-161feddb-d36b-4dda-84ba-0ff7d1e62664 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received unexpected event network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 for instance with vm_state active and task_state None.#033[00m Dec 2 05:05:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:05:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:05:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:05:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:05:08 localhost systemd[1]: tmp-crun.rp1vlC.mount: Deactivated successfully. Dec 2 05:05:08 localhost systemd[1]: tmp-crun.uqV21J.mount: Deactivated successfully. 
Dec 2 05:05:08 localhost podman[312073]: 2025-12-02 10:05:08.644924091 +0000 UTC m=+0.129098111 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:05:08 localhost podman[312072]: 2025-12-02 10:05:08.605297312 +0000 UTC m=+0.090412221 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:05:08 localhost podman[312074]: 2025-12-02 10:05:08.654359681 +0000 UTC m=+0.132832916 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:05:08 localhost podman[312074]: 2025-12-02 10:05:08.666801334 +0000 UTC m=+0.145274579 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 
'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2) Dec 2 05:05:08 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:05:08 localhost podman[312075]: 2025-12-02 10:05:08.705287247 +0000 UTC m=+0.180287815 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:05:08 localhost podman[312072]: 2025-12-02 10:05:08.738528659 +0000 UTC m=+0.223643608 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 05:05:08 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 05:05:08 localhost podman[312073]: 2025-12-02 10:05:08.758747932 +0000 UTC m=+0.242922002 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:05:08 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 05:05:08 localhost podman[312075]: 2025-12-02 10:05:08.79542565 +0000 UTC m=+0.270426238 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 05:05:08 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:05:08 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v173: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 145 op/s Dec 2 05:05:09 localhost nova_compute[281045]: 2025-12-02 10:05:09.699 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 e111: 6 total, 6 up, 6 in Dec 2 05:05:10 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 1.2 MiB/s rd, 23 KiB/s wr, 86 op/s Dec 2 05:05:11 localhost nova_compute[281045]: 2025-12-02 10:05:11.662 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:11 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:05:11.865 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:05:10Z, description=, device_id=1ad64abe-8977-48b7-83a3-2b942dce5ba9, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a6214a1b-e83a-4e97-8baa-487aca9c15e4, ip_allocation=immediate, mac_address=fa:16:3e:e0:ca:9c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, 
provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=794, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:05:11Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:05:11 localhost nova_compute[281045]: 2025-12-02 10:05:11.974 281049 DEBUG nova.compute.manager [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-changed-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:05:11 localhost nova_compute[281045]: 2025-12-02 10:05:11.975 281049 DEBUG nova.compute.manager [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Refreshing instance network info cache due to event network-changed-a0a73e76-685f-4ba0-87b5-5dd27b54fab4. 
external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Dec 2 05:05:11 localhost nova_compute[281045]: 2025-12-02 10:05:11.976 281049 DEBUG oslo_concurrency.lockutils [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:05:11 localhost nova_compute[281045]: 2025-12-02 10:05:11.976 281049 DEBUG oslo_concurrency.lockutils [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquired lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:05:11 localhost nova_compute[281045]: 2025-12-02 10:05:11.977 281049 DEBUG nova.network.neutron [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Refreshing network info cache for port a0a73e76-685f-4ba0-87b5-5dd27b54fab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Dec 2 05:05:12 localhost openstack_network_exporter[241816]: ERROR 10:05:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:05:12 localhost openstack_network_exporter[241816]: ERROR 10:05:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:05:12 localhost openstack_network_exporter[241816]: ERROR 10:05:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:05:12 localhost 
openstack_network_exporter[241816]: ERROR 10:05:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:05:12 localhost openstack_network_exporter[241816]: Dec 2 05:05:12 localhost openstack_network_exporter[241816]: ERROR 10:05:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:05:12 localhost openstack_network_exporter[241816]: Dec 2 05:05:12 localhost systemd[1]: tmp-crun.EiORcO.mount: Deactivated successfully. Dec 2 05:05:12 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 5 addresses Dec 2 05:05:12 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:05:12 localhost podman[312169]: 2025-12-02 10:05:12.251253324 +0000 UTC m=+0.079055072 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:05:12 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:05:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:12 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:05:12.716 262347 INFO neutron.agent.dhcp.agent [None req-4a24a7ce-3642-4068-9348-3d5c3eb3f10e - - - - - -] DHCP configuration for ports {'a6214a1b-e83a-4e97-8baa-487aca9c15e4'} is completed#033[00m Dec 2 05:05:12 localhost 
ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v176: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 141 op/s Dec 2 05:05:13 localhost nova_compute[281045]: 2025-12-02 10:05:13.824 281049 DEBUG nova.network.neutron [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Updated VIF entry in instance network info cache for port a0a73e76-685f-4ba0-87b5-5dd27b54fab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Dec 2 05:05:13 localhost nova_compute[281045]: 2025-12-02 10:05:13.825 281049 DEBUG nova.network.neutron [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Updating instance_info_cache with network_info: [{"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": 
"tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:05:14 localhost nova_compute[281045]: 2025-12-02 10:05:14.753 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:14 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v177: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 141 op/s Dec 2 05:05:15 localhost nova_compute[281045]: 2025-12-02 10:05:15.202 281049 DEBUG oslo_concurrency.lockutils [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Releasing lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:05:15 localhost ovn_controller[153778]: 2025-12-02T10:05:15Z|00112|binding|INFO|Releasing lport 0999b431-c362-4180-a7a9-8664fe007369 from this chassis (sb_readonly=0) Dec 2 05:05:16 localhost nova_compute[281045]: 2025-12-02 10:05:16.002 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:16 localhost nova_compute[281045]: 2025-12-02 10:05:16.022 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:16 localhost systemd[1]: tmp-crun.zOFsQU.mount: Deactivated successfully. 
Dec 2 05:05:16 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses Dec 2 05:05:16 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:05:16 localhost podman[312206]: 2025-12-02 10:05:16.098559079 +0000 UTC m=+0.070058755 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:05:16 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:05:16 localhost nova_compute[281045]: 2025-12-02 10:05:16.665 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:16 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v178: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 19 KiB/s wr, 116 op/s Dec 2 05:05:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:05:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 05:05:18 localhost podman[312229]: 2025-12-02 10:05:18.080498188 +0000 UTC m=+0.081842138 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, vcs-type=git, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.) Dec 2 05:05:18 localhost podman[312229]: 2025-12-02 10:05:18.095734547 +0000 UTC m=+0.097078417 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.33.7, version=9.6) Dec 2 05:05:18 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 05:05:18 localhost systemd[1]: tmp-crun.j7WSSm.mount: Deactivated successfully. Dec 2 05:05:18 localhost podman[312228]: 2025-12-02 10:05:18.152799612 +0000 UTC m=+0.153643826 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:05:18 localhost podman[312228]: 2025-12-02 
10:05:18.167985288 +0000 UTC m=+0.168829562 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:05:18 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:05:18 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 45 op/s Dec 2 05:05:19 localhost nova_compute[281045]: 2025-12-02 10:05:19.755 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:19 localhost nova_compute[281045]: 2025-12-02 10:05:19.796 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:20 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:05:20 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:05:20 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:05:20 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:05:20 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:05:20 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev 46303660-eb77-4f4b-b8c5-e98abc359912 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:05:20 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 46303660-eb77-4f4b-b8c5-e98abc359912 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:05:20 localhost ceph-mgr[287188]: [progress INFO root] Completed event 46303660-eb77-4f4b-b8c5-e98abc359912 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds 
Dec 2 05:05:20 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:05:20 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:05:20 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:05:20 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:05:20 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:05:20 localhost podman[312353]: 2025-12-02 10:05:20.514792279 +0000 UTC m=+0.051485624 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:05:20 localhost ovn_controller[153778]: 2025-12-02T10:05:20Z|00113|binding|INFO|Releasing lport 0999b431-c362-4180-a7a9-8664fe007369 from this chassis (sb_readonly=0) Dec 2 05:05:20 localhost nova_compute[281045]: 2025-12-02 10:05:20.579 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:20 localhost ovn_controller[153778]: 2025-12-02T10:05:20Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:16:9d:c1 10.100.0.12 Dec 2 05:05:20 localhost 
ovn_controller[153778]: 2025-12-02T10:05:20Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:9d:c1 10.100.0.12 Dec 2 05:05:20 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 1.3 MiB/s rd, 44 op/s Dec 2 05:05:21 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:05:21 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:05:21 localhost nova_compute[281045]: 2025-12-02 10:05:21.709 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:22 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:05:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:05:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:22 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:05:22 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v181: 177 pgs: 177 active+clean; 225 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 101 op/s Dec 2 05:05:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 05:05:24 localhost podman[312391]: 2025-12-02 10:05:24.07843817 +0000 UTC m=+0.079600799 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:05:24 localhost podman[312391]: 2025-12-02 10:05:24.116589253 +0000 UTC m=+0.117751892 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, tcib_managed=true) Dec 2 05:05:24 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:05:24 localhost nova_compute[281045]: 2025-12-02 10:05:24.801 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:24 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v182: 177 pgs: 177 active+clean; 225 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s Dec 2 05:05:26 localhost nova_compute[281045]: 2025-12-02 10:05:26.713 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:26 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 225 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s Dec 2 05:05:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:27.744 159597 DEBUG eventlet.wsgi.server [-] (159597) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 05:05:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:27.746 159597 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015 Dec 2 05:05:27 localhost ovn_metadata_agent[159477]: Accept: */*#015 Dec 2 05:05:27 localhost ovn_metadata_agent[159477]: Connection: close#015 Dec 2 05:05:27 localhost ovn_metadata_agent[159477]: Content-Type: text/plain#015 Dec 2 05:05:27 localhost ovn_metadata_agent[159477]: Host: 169.254.169.254#015 Dec 2 05:05:27 localhost ovn_metadata_agent[159477]: User-Agent: curl/7.84.0#015 Dec 2 05:05:27 localhost ovn_metadata_agent[159477]: X-Forwarded-For: 10.100.0.12#015 Dec 2 05:05:27 localhost ovn_metadata_agent[159477]: X-Ovn-Network-Id: 45d02cf1-f511-4416-b7c1-b37c417f16f9 __call__ 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 05:05:28 localhost ovn_controller[153778]: 2025-12-02T10:05:28Z|00114|binding|INFO|Releasing lport 0999b431-c362-4180-a7a9-8664fe007369 from this chassis (sb_readonly=0) Dec 2 05:05:28 localhost nova_compute[281045]: 2025-12-02 10:05:28.203 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:28 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:05:28 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:05:28 localhost podman[312426]: 2025-12-02 10:05:28.206112736 +0000 UTC m=+0.098173000 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:05:28 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:05:28 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v184: 177 pgs: 177 active+clean; 225 MiB data, 890 MiB used, 41 GiB / 42 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s Dec 2 05:05:29 localhost nova_compute[281045]: 2025-12-02 10:05:29.803 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:30 localhost haproxy-metadata-proxy-45d02cf1-f511-4416-b7c1-b37c417f16f9[312063]: 
10.100.0.12:53634 [02/Dec/2025:10:05:27.743] listener listener/metadata 0/0/0/2378/2378 200 1657 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1" Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.121 159597 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.122 159597 INFO eventlet.wsgi.server [-] 10.100.0.12, "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200 len: 1673 time: 2.3756862#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.243 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.244 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.244 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.245 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.245 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.247 281049 INFO nova.compute.manager [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Terminating instance#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.248 281049 DEBUG nova.compute.manager [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Start destroying the instance on the hypervisor. 
_shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Dec 2 05:05:30 localhost kernel: device tapa0a73e76-68 left promiscuous mode Dec 2 05:05:30 localhost NetworkManager[5967]: [1764669930.3291] device (tapa0a73e76-68): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Dec 2 05:05:30 localhost ovn_controller[153778]: 2025-12-02T10:05:30Z|00115|binding|INFO|Releasing lport a0a73e76-685f-4ba0-87b5-5dd27b54fab4 from this chassis (sb_readonly=0) Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.359 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:30 localhost ovn_controller[153778]: 2025-12-02T10:05:30Z|00116|binding|INFO|Setting lport a0a73e76-685f-4ba0-87b5-5dd27b54fab4 down in Southbound Dec 2 05:05:30 localhost ovn_controller[153778]: 2025-12-02T10:05:30Z|00117|binding|INFO|Removing iface tapa0a73e76-68 ovn-installed in OVS Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.363 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.368 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:9d:c1 10.100.0.12'], port_security=['fa:16:3e:16:9d:c1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'abf8d33c-4e24-4d26-af41-b01c828c67e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50df25ee29424615807a458690cdf8d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b257864-5151-448f-941d-2c9a748f5881, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=a0a73e76-685f-4ba0-87b5-5dd27b54fab4) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.369 159483 INFO neutron.agent.ovn.metadata.agent [-] Port a0a73e76-685f-4ba0-87b5-5dd27b54fab4 in datapath 45d02cf1-f511-4416-b7c1-b37c417f16f9 unbound from our chassis#033[00m Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.370 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45d02cf1-f511-4416-b7c1-b37c417f16f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.371 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe9994f-d7c2-4cb0-a4bc-5d29824c2ed4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.371 159483 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9 namespace which is not needed anymore#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.375 281049 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:30 localhost systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000a.scope: Deactivated successfully. Dec 2 05:05:30 localhost systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000a.scope: Consumed 14.122s CPU time. Dec 2 05:05:30 localhost systemd-machined[202765]: Machine qemu-5-instance-0000000a terminated. Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.472 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.477 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.489 281049 INFO nova.virt.libvirt.driver [-] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Instance destroyed successfully.#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.491 281049 DEBUG nova.objects.instance [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lazy-loading 'resources' on Instance uuid abf8d33c-4e24-4d26-af41-b01c828c67e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.507 281049 DEBUG nova.virt.libvirt.vif [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T10:04:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=10,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP/dbfwF7RFRTDuwB6jwzuSQ/IcUc/koBGae2h16UX9iSnGmmWafAjmR0zhsoi8E87Oi2Cm1JEv8wzMjtBlM1hsGOt9Lg/6ZEqGVxh82xbfu37aVfdDp2kn2MPZvfs8d3A==',key_name='tempest-keypair-1080862001',keypairs=,launch_index=0,launched_at=2025-12-02T10:05:06Z,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='50df25ee29424615807a458690cdf8d7',ramdisk_id='',reservation_id='r-5yub6qye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-2112874438',owner_user_name
='tempest-ServersV294TestFqdnHostnames-2112874438-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2025-12-02T10:05:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='955214da09cd44dba70e1a06eabc9023',uuid=abf8d33c-4e24-4d26-af41-b01c828c67e0,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.508 281049 DEBUG nova.network.os_vif_util [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Converting VIF {"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": 
"fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.509 281049 DEBUG nova.network.os_vif_util [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:9d:c1,bridge_name='br-int',has_traffic_filtering=True,id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4,network=Network(45d02cf1-f511-4416-b7c1-b37c417f16f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a73e76-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.509 281049 DEBUG os_vif [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 
955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:9d:c1,bridge_name='br-int',has_traffic_filtering=True,id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4,network=Network(45d02cf1-f511-4416-b7c1-b37c417f16f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a73e76-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.511 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.512 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0a73e76-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.513 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.516 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.518 281049 INFO os_vif [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:9d:c1,bridge_name='br-int',has_traffic_filtering=True,id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4,network=Network(45d02cf1-f511-4416-b7c1-b37c417f16f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a73e76-68')#033[00m Dec 2 
05:05:30 localhost neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9[312057]: [NOTICE] (312061) : haproxy version is 2.8.14-c23fe91 Dec 2 05:05:30 localhost neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9[312057]: [NOTICE] (312061) : path to executable is /usr/sbin/haproxy Dec 2 05:05:30 localhost neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9[312057]: [WARNING] (312061) : Exiting Master process... Dec 2 05:05:30 localhost neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9[312057]: [ALERT] (312061) : Current worker (312063) exited with code 143 (Terminated) Dec 2 05:05:30 localhost neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9[312057]: [WARNING] (312061) : All workers exited. Exiting... (0) Dec 2 05:05:30 localhost systemd[1]: libpod-bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241.scope: Deactivated successfully. Dec 2 05:05:30 localhost podman[312471]: 2025-12-02 10:05:30.55241396 +0000 UTC m=+0.081188187 container died bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:05:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241-userdata-shm.mount: Deactivated successfully. Dec 2 05:05:30 localhost systemd[1]: var-lib-containers-storage-overlay-5520c5903efa84481bfea75a392635964a1c0dc18350544c148b6b54d47a4621-merged.mount: Deactivated successfully. 
Dec 2 05:05:30 localhost podman[312471]: 2025-12-02 10:05:30.591134161 +0000 UTC m=+0.119908328 container cleanup bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:05:30 localhost podman[312508]: 2025-12-02 10:05:30.617664397 +0000 UTC m=+0.061260694 container cleanup bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:05:30 localhost systemd[1]: libpod-conmon-bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241.scope: Deactivated successfully. 
Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.630 281049 DEBUG nova.compute.manager [req-95756fc1-76c1-4302-90f7-9c8a1ec7b14b req-63c8e17c-e9cd-4fd3-97d2-90e216b2f2b8 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-vif-unplugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.631 281049 DEBUG oslo_concurrency.lockutils [req-95756fc1-76c1-4302-90f7-9c8a1ec7b14b req-63c8e17c-e9cd-4fd3-97d2-90e216b2f2b8 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.631 281049 DEBUG oslo_concurrency.lockutils [req-95756fc1-76c1-4302-90f7-9c8a1ec7b14b req-63c8e17c-e9cd-4fd3-97d2-90e216b2f2b8 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.631 281049 DEBUG oslo_concurrency.lockutils [req-95756fc1-76c1-4302-90f7-9c8a1ec7b14b req-63c8e17c-e9cd-4fd3-97d2-90e216b2f2b8 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.631 281049 DEBUG nova.compute.manager [req-95756fc1-76c1-4302-90f7-9c8a1ec7b14b req-63c8e17c-e9cd-4fd3-97d2-90e216b2f2b8 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] No waiting events found dispatching network-vif-unplugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.632 281049 DEBUG nova.compute.manager [req-95756fc1-76c1-4302-90f7-9c8a1ec7b14b req-63c8e17c-e9cd-4fd3-97d2-90e216b2f2b8 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-vif-unplugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 for instance with task_state deleting. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Dec 2 05:05:30 localhost podman[312526]: 2025-12-02 10:05:30.689662341 +0000 UTC m=+0.081221549 container remove bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.693 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d5926755-ff09-4456-a791-7d44ecb1da11]: (4, ('Tue Dec 2 10:05:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9 (bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241)\nbd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241\nTue Dec 2 10:05:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9 (bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241)\nbd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.696 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[9974c7cc-7aa1-421f-b895-5221962c29ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.697 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45d02cf1-f0, 
bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.699 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:30 localhost kernel: device tap45d02cf1-f0 left promiscuous mode Dec 2 05:05:30 localhost nova_compute[281045]: 2025-12-02 10:05:30.710 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.713 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[626fcfc3-fee1-46e8-952f-9968fa860a4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.726 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd88f42-d911-4f84-ae0c-8882ac582ee5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.728 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4a936d11-4d83-4184-9fcb-f6170d0a8ecd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.739 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[c227989c-aebf-476d-90f2-9314faa20104]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], 
['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 
'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1206974, 'reachable_time': 21153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312543, 'error': None, 'target': 'ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9', 
'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.741 159602 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 2 05:05:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:30.741 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb53196-8b04-421e-a35c-05d7752315af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:30 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 active+clean; 225 MiB data, 890 MiB used, 41 GiB / 42 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s Dec 2 05:05:31 localhost nova_compute[281045]: 2025-12-02 10:05:31.206 281049 INFO nova.virt.libvirt.driver [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Deleting instance files /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0_del#033[00m Dec 2 05:05:31 localhost nova_compute[281045]: 2025-12-02 10:05:31.207 281049 INFO nova.virt.libvirt.driver [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Deletion of /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0_del complete#033[00m Dec 2 05:05:31 localhost nova_compute[281045]: 2025-12-02 10:05:31.298 281049 INFO nova.compute.manager [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Took 1.05 seconds to destroy the instance on 
the hypervisor.#033[00m Dec 2 05:05:31 localhost nova_compute[281045]: 2025-12-02 10:05:31.298 281049 DEBUG oslo.service.loopingcall [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Dec 2 05:05:31 localhost nova_compute[281045]: 2025-12-02 10:05:31.299 281049 DEBUG nova.compute.manager [-] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Dec 2 05:05:31 localhost nova_compute[281045]: 2025-12-02 10:05:31.299 281049 DEBUG nova.network.neutron [-] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Dec 2 05:05:31 localhost systemd[1]: run-netns-ovnmeta\x2d45d02cf1\x2df511\x2d4416\x2db7c1\x2db37c417f16f9.mount: Deactivated successfully. 
Dec 2 05:05:31 localhost nova_compute[281045]: 2025-12-02 10:05:31.729 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:32 localhost neutron_sriov_agent[255428]: 2025-12-02 10:05:32.561 2 INFO neutron.agent.securitygroups_rpc [req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 req-ec3b1f9e-8373-4159-93fc-d4de0998f605 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Security group member updated ['2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b']#033[00m Dec 2 05:05:32 localhost nova_compute[281045]: 2025-12-02 10:05:32.662 281049 DEBUG nova.network.neutron [-] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:05:32 localhost nova_compute[281045]: 2025-12-02 10:05:32.682 281049 INFO nova.compute.manager [-] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Took 1.38 seconds to deallocate network for instance.#033[00m Dec 2 05:05:32 localhost nova_compute[281045]: 2025-12-02 10:05:32.699 281049 DEBUG nova.compute.manager [req-10bc4d71-e39c-459c-bfd8-3fa1c5c9d909 req-b8ecff71-8327-428a-84bb-293f464d942f dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-vif-deleted-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:05:32 localhost nova_compute[281045]: 2025-12-02 10:05:32.743 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 
50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:05:32 localhost nova_compute[281045]: 2025-12-02 10:05:32.743 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:05:32 localhost nova_compute[281045]: 2025-12-02 10:05:32.793 281049 DEBUG oslo_concurrency.processutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:05:32 localhost nova_compute[281045]: 2025-12-02 10:05:32.966 281049 DEBUG nova.compute.manager [req-c4042815-cffe-404e-898b-acca60f0c037 req-606d737d-196f-4bcd-b340-87afb17d5e0e dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:05:32 localhost nova_compute[281045]: 2025-12-02 10:05:32.967 281049 DEBUG oslo_concurrency.lockutils [req-c4042815-cffe-404e-898b-acca60f0c037 req-606d737d-196f-4bcd-b340-87afb17d5e0e dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" by 
"nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:05:32 localhost nova_compute[281045]: 2025-12-02 10:05:32.967 281049 DEBUG oslo_concurrency.lockutils [req-c4042815-cffe-404e-898b-acca60f0c037 req-606d737d-196f-4bcd-b340-87afb17d5e0e dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:05:32 localhost nova_compute[281045]: 2025-12-02 10:05:32.967 281049 DEBUG oslo_concurrency.lockutils [req-c4042815-cffe-404e-898b-acca60f0c037 req-606d737d-196f-4bcd-b340-87afb17d5e0e dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:32 localhost nova_compute[281045]: 2025-12-02 10:05:32.968 281049 DEBUG nova.compute.manager [req-c4042815-cffe-404e-898b-acca60f0c037 req-606d737d-196f-4bcd-b340-87afb17d5e0e dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] No waiting events found dispatching network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:05:32 localhost nova_compute[281045]: 2025-12-02 10:05:32.968 281049 WARNING nova.compute.manager [req-c4042815-cffe-404e-898b-acca60f0c037 req-606d737d-196f-4bcd-b340-87afb17d5e0e dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 
abf8d33c-4e24-4d26-af41-b01c828c67e0] Received unexpected event network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 for instance with vm_state deleted and task_state None.#033[00m Dec 2 05:05:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v186: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s Dec 2 05:05:33 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:05:33 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2691401196' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:05:33 localhost nova_compute[281045]: 2025-12-02 10:05:33.248 281049 DEBUG oslo_concurrency.processutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:05:33 localhost nova_compute[281045]: 2025-12-02 10:05:33.255 281049 DEBUG nova.compute.provider_tree [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:05:33 localhost nova_compute[281045]: 2025-12-02 10:05:33.277 281049 DEBUG nova.scheduler.client.report [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 
15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:05:33 localhost nova_compute[281045]: 2025-12-02 10:05:33.304 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:33 localhost nova_compute[281045]: 2025-12-02 10:05:33.335 281049 INFO nova.scheduler.client.report [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Deleted allocations for instance abf8d33c-4e24-4d26-af41-b01c828c67e0#033[00m Dec 2 05:05:33 localhost nova_compute[281045]: 2025-12-02 10:05:33.403 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 3.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:33 localhost podman[239757]: time="2025-12-02T10:05:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:05:33 localhost podman[239757]: @ - - [02/Dec/2025:10:05:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 
2 05:05:33 localhost nova_compute[281045]: 2025-12-02 10:05:33.675 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:33 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:33.676 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:05:33 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:33.678 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:05:33 localhost podman[239757]: @ - - [02/Dec/2025:10:05:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19214 "" "Go-http-client/1.1" Dec 2 05:05:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v187: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s Dec 2 05:05:35 localhost nova_compute[281045]: 2025-12-02 10:05:35.515 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:36 localhost nova_compute[281045]: 2025-12-02 10:05:36.765 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:36 
localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:05:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:05:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:05:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:05:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:05:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:05:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v188: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s Dec 2 05:05:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:37 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses Dec 2 05:05:37 localhost podman[312585]: 2025-12-02 10:05:37.561107675 +0000 UTC m=+0.051153164 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 05:05:37 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:05:37 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:05:37 localhost 
nova_compute[281045]: 2025-12-02 10:05:37.644 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:05:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:05:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:05:39 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s Dec 2 05:05:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:05:39 localhost podman[312609]: 2025-12-02 10:05:39.091754186 +0000 UTC m=+0.079477635 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.140949) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939141014, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2494, "num_deletes": 256, "total_data_size": 3266003, "memory_usage": 3315520, "flush_reason": "Manual Compaction"} Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Dec 2 05:05:39 localhost systemd[1]: tmp-crun.NrPpdt.mount: Deactivated successfully. 
Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939156245, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 2111274, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17047, "largest_seqno": 19536, "table_properties": {"data_size": 2102355, "index_size": 5489, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19897, "raw_average_key_size": 21, "raw_value_size": 2083908, "raw_average_value_size": 2205, "num_data_blocks": 241, "num_entries": 945, "num_filter_entries": 945, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669769, "oldest_key_time": 1764669769, "file_creation_time": 1764669939, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 15340 microseconds, and 5843 cpu microseconds. Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.156292) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 2111274 bytes OK Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.156318) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.158137) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.158162) EVENT_LOG_v1 {"time_micros": 1764669939158156, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.158187) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3254733, prev total WAL file size 3254733, number of live WAL files 2. Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.159365) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. 
'7061786F73003132303439' seq:0, type:0; will stop at (end) Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(2061KB)], [24(16MB)] Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939159479, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 19278954, "oldest_snapshot_seqno": -1} Dec 2 05:05:39 localhost podman[312609]: 2025-12-02 10:05:39.182996852 +0000 UTC m=+0.170720301 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 2 05:05:39 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:05:39 localhost podman[312607]: 2025-12-02 10:05:39.198485308 +0000 UTC m=+0.195919435 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 12431 keys, 16638727 bytes, temperature: kUnknown Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939255275, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 16638727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16567706, "index_size": 38856, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 333168, "raw_average_key_size": 26, "raw_value_size": 16355659, "raw_average_value_size": 1315, "num_data_blocks": 1480, "num_entries": 12431, "num_filter_entries": 12431, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764669939, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.255720) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 16638727 bytes Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.258329) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.0 rd, 173.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 16.4 +0.0 blob) out(15.9 +0.0 blob), read-write-amplify(17.0) write-amplify(7.9) OK, records in: 12960, records dropped: 529 output_compression: NoCompression Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.258366) EVENT_LOG_v1 {"time_micros": 1764669939258349, "job": 12, "event": "compaction_finished", "compaction_time_micros": 95931, "compaction_time_cpu_micros": 44149, "output_level": 6, "num_output_files": 1, "total_output_size": 16638727, "num_input_records": 12960, "num_output_records": 12431, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005541914/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939259030, "job": 12, "event": "table_file_deletion", "file_number": 26} Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939261678, "job": 12, "event": "table_file_deletion", "file_number": 24} Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.159258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.261743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.261749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.261752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.261755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:05:39 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.261758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:05:39 localhost podman[312615]: 2025-12-02 10:05:39.163358918 +0000 UTC m=+0.145571847 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller) Dec 2 05:05:39 localhost podman[312608]: 2025-12-02 10:05:39.272526835 +0000 UTC m=+0.262964967 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 
'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:05:39 localhost podman[312607]: 2025-12-02 10:05:39.280580053 +0000 UTC m=+0.278014180 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent) Dec 2 05:05:39 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 05:05:39 localhost podman[312615]: 2025-12-02 10:05:39.297108782 +0000 UTC m=+0.279321671 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:05:39 localhost podman[312608]: 2025-12-02 10:05:39.308852963 +0000 UTC m=+0.299291045 container exec_died 
8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 05:05:39 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 05:05:39 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 05:05:39 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:39.680 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:05:40 localhost nova_compute[281045]: 2025-12-02 10:05:40.517 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v190: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s Dec 2 05:05:41 localhost nova_compute[281045]: 2025-12-02 10:05:41.791 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:42 localhost openstack_network_exporter[241816]: ERROR 10:05:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:05:42 localhost openstack_network_exporter[241816]: ERROR 10:05:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:05:42 localhost openstack_network_exporter[241816]: ERROR 10:05:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:05:42 localhost openstack_network_exporter[241816]: ERROR 10:05:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:05:42 localhost openstack_network_exporter[241816]: Dec 2 05:05:42 localhost openstack_network_exporter[241816]: ERROR 10:05:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:05:42 localhost 
openstack_network_exporter[241816]: Dec 2 05:05:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v191: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s Dec 2 05:05:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v192: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail Dec 2 05:05:45 localhost nova_compute[281045]: 2025-12-02 10:05:45.488 281049 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:05:45 localhost nova_compute[281045]: 2025-12-02 10:05:45.489 281049 INFO nova.compute.manager [-] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] VM Stopped (Lifecycle Event)#033[00m Dec 2 05:05:45 localhost nova_compute[281045]: 2025-12-02 10:05:45.521 281049 DEBUG nova.compute.manager [None req-9516d488-b71b-4427-a395-0218dbfe3eca - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:05:45 localhost nova_compute[281045]: 2025-12-02 10:05:45.521 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:46 localhost nova_compute[281045]: 2025-12-02 10:05:46.834 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v193: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail Dec 2 05:05:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:48 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e112 e112: 6 total, 6 up, 6 in Dec 2 05:05:48 localhost snmpd[69217]: empty variable list in _query Dec 2 05:05:48 localhost snmpd[69217]: empty variable list in _query Dec 2 05:05:48 localhost snmpd[69217]: empty variable list in _query Dec 2 05:05:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:05:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:05:49 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v195: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 7.1 KiB/s rd, 1.2 KiB/s wr, 10 op/s Dec 2 05:05:49 localhost podman[312692]: 2025-12-02 10:05:49.086940083 +0000 UTC m=+0.087784510 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', 
'--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:05:49 localhost podman[312692]: 2025-12-02 10:05:49.095973051 +0000 UTC m=+0.096817478 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:05:49 localhost systemd[1]: 
3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:05:49 localhost systemd[1]: tmp-crun.Koyr2x.mount: Deactivated successfully. Dec 2 05:05:49 localhost podman[312693]: 2025-12-02 10:05:49.182098089 +0000 UTC m=+0.181196012 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, 
com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, version=9.6, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible) Dec 2 05:05:49 localhost podman[312693]: 2025-12-02 10:05:49.199909648 +0000 UTC m=+0.199007571 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 
'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 05:05:49 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:05:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:05:49.440 262347 INFO neutron.agent.linux.ip_lib [None req-2c160541-b8d6-4fe6-9f02-3e7f05aabdcf - - - - - -] Device tap53cc812e-e4 cannot be used as it has no MAC address#033[00m Dec 2 05:05:49 localhost nova_compute[281045]: 2025-12-02 10:05:49.461 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:49 localhost kernel: device tap53cc812e-e4 entered promiscuous mode Dec 2 05:05:49 localhost NetworkManager[5967]: [1764669949.4696] manager: (tap53cc812e-e4): new Generic device (/org/freedesktop/NetworkManager/Devices/27) Dec 2 05:05:49 localhost ovn_controller[153778]: 2025-12-02T10:05:49Z|00118|binding|INFO|Claiming lport 53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa for this chassis. Dec 2 05:05:49 localhost ovn_controller[153778]: 2025-12-02T10:05:49Z|00119|binding|INFO|53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa: Claiming unknown Dec 2 05:05:49 localhost nova_compute[281045]: 2025-12-02 10:05:49.472 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:49 localhost systemd-udevd[312744]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:05:49 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:49.478 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-ea526ff6-f129-410b-b41d-c614aa65ab89', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea526ff6-f129-410b-b41d-c614aa65ab89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '91b4824d03bd43c4aca137037a18bd3d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5b1c8ab-7d8a-4df0-87a7-0efffca70a64, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:05:49 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:49.481 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa in datapath ea526ff6-f129-410b-b41d-c614aa65ab89 bound to our chassis#033[00m Dec 2 05:05:49 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:49.482 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ea526ff6-f129-410b-b41d-c614aa65ab89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:05:49 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:49.485 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab287f3-a202-4d71-af12-2fe3e92a1b05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:49 localhost journal[229262]: ethtool ioctl error on tap53cc812e-e4: No such device Dec 2 05:05:49 localhost ovn_controller[153778]: 2025-12-02T10:05:49Z|00120|binding|INFO|Setting lport 53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa ovn-installed in OVS Dec 2 05:05:49 localhost ovn_controller[153778]: 2025-12-02T10:05:49Z|00121|binding|INFO|Setting lport 53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa up in Southbound Dec 2 05:05:49 localhost nova_compute[281045]: 2025-12-02 10:05:49.506 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:49 localhost journal[229262]: ethtool ioctl error on tap53cc812e-e4: No such device Dec 2 05:05:49 localhost journal[229262]: ethtool ioctl error on tap53cc812e-e4: No such device Dec 2 05:05:49 localhost journal[229262]: ethtool ioctl error on tap53cc812e-e4: No such device Dec 2 05:05:49 localhost journal[229262]: ethtool ioctl error on tap53cc812e-e4: No such device Dec 2 05:05:49 localhost journal[229262]: ethtool ioctl error on tap53cc812e-e4: No such device Dec 2 05:05:49 localhost journal[229262]: ethtool ioctl error on tap53cc812e-e4: No such device Dec 2 05:05:49 localhost journal[229262]: ethtool ioctl error on tap53cc812e-e4: No such device Dec 2 05:05:49 localhost nova_compute[281045]: 2025-12-02 10:05:49.543 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:49 localhost nova_compute[281045]: 2025-12-02 10:05:49.569 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:05:49.645 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:05:49Z, description=, device_id=798ad2c1-39c2-42cf-b43f-5f28ae054b5b, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e5f7578c-6093-4cfa-893b-bf9285530f81, ip_allocation=immediate, mac_address=fa:16:3e:80:3f:18, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1009, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:05:49Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:05:49 localhost podman[312796]: 2025-12-02 10:05:49.863356989 +0000 UTC m=+0.045813179 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:05:49 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:05:49 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:05:49 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:05:50 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:05:50.072 262347 INFO neutron.agent.dhcp.agent [None req-b3734218-3da9-483f-bc04-d6eb76ad8c9c - - - - - -] DHCP configuration for ports {'e5f7578c-6093-4cfa-893b-bf9285530f81'} is completed#033[00m Dec 2 05:05:50 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e113 e113: 6 total, 6 up, 6 in Dec 2 05:05:50 localhost podman[312851]: Dec 2 05:05:50 localhost podman[312851]: 2025-12-02 10:05:50.374244191 +0000 UTC m=+0.101832403 container create c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea526ff6-f129-410b-b41d-c614aa65ab89, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:05:50 localhost systemd[1]: Started 
libpod-conmon-c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68.scope. Dec 2 05:05:50 localhost podman[312851]: 2025-12-02 10:05:50.326428771 +0000 UTC m=+0.054017043 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:05:50 localhost systemd[1]: Started libcrun container. Dec 2 05:05:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277e8dcddf9fb3f284bdf1370a104896fdcdf3617b2ea15ab092627caa367817/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:05:50 localhost podman[312851]: 2025-12-02 10:05:50.454568732 +0000 UTC m=+0.182156964 container init c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea526ff6-f129-410b-b41d-c614aa65ab89, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:05:50 localhost podman[312851]: 2025-12-02 10:05:50.463378102 +0000 UTC m=+0.190966334 container start c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea526ff6-f129-410b-b41d-c614aa65ab89, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:05:50 localhost dnsmasq[312871]: started, version 2.85 cachesize 150 Dec 2 05:05:50 localhost 
dnsmasq[312871]: DNS service limited to local subnets Dec 2 05:05:50 localhost dnsmasq[312871]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:05:50 localhost dnsmasq[312871]: warning: no upstream servers configured Dec 2 05:05:50 localhost dnsmasq-dhcp[312871]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d Dec 2 05:05:50 localhost dnsmasq[312871]: read /var/lib/neutron/dhcp/ea526ff6-f129-410b-b41d-c614aa65ab89/addn_hosts - 0 addresses Dec 2 05:05:50 localhost dnsmasq-dhcp[312871]: read /var/lib/neutron/dhcp/ea526ff6-f129-410b-b41d-c614aa65ab89/host Dec 2 05:05:50 localhost dnsmasq-dhcp[312871]: read /var/lib/neutron/dhcp/ea526ff6-f129-410b-b41d-c614aa65ab89/opts Dec 2 05:05:50 localhost nova_compute[281045]: 2025-12-02 10:05:50.525 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:50 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:05:50.648 262347 INFO neutron.agent.dhcp.agent [None req-916945a0-bb02-4589-8d2d-b48f8ce87a93 - - - - - -] DHCP configuration for ports {'7e5719fd-341c-4709-ab71-aae4434cc175'} is completed#033[00m Dec 2 05:05:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 8.9 KiB/s rd, 1.5 KiB/s wr, 12 op/s Dec 2 05:05:51 localhost nova_compute[281045]: 2025-12-02 10:05:51.531 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:51 localhost nova_compute[281045]: 2025-12-02 10:05:51.835 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e113 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:53 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v198: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s Dec 2 05:05:53 localhost dnsmasq[312871]: exiting on receipt of SIGTERM Dec 2 05:05:53 localhost podman[312890]: 2025-12-02 10:05:53.717286538 +0000 UTC m=+0.057898861 container kill c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea526ff6-f129-410b-b41d-c614aa65ab89, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:05:53 localhost ovn_controller[153778]: 2025-12-02T10:05:53Z|00122|binding|INFO|Removing iface tap53cc812e-e4 ovn-installed in OVS Dec 2 05:05:53 localhost ovn_controller[153778]: 2025-12-02T10:05:53Z|00123|binding|INFO|Removing lport 53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa ovn-installed in OVS Dec 2 05:05:53 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:53.730 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port a58a7c6f-60c0-401e-9cb3-444e772d3aeb with type ""#033[00m Dec 2 05:05:53 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:53.732 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], 
requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-ea526ff6-f129-410b-b41d-c614aa65ab89', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea526ff6-f129-410b-b41d-c614aa65ab89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '91b4824d03bd43c4aca137037a18bd3d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5b1c8ab-7d8a-4df0-87a7-0efffca70a64, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:05:53 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:53.734 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa in datapath ea526ff6-f129-410b-b41d-c614aa65ab89 unbound from our chassis#033[00m Dec 2 05:05:53 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:53.736 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ea526ff6-f129-410b-b41d-c614aa65ab89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:05:53 localhost ovn_metadata_agent[159477]: 2025-12-02 10:05:53.737 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[724e625f-f205-4a3d-8c8e-3239717ecb7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:53 localhost systemd[1]: 
libpod-c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68.scope: Deactivated successfully. Dec 2 05:05:53 localhost nova_compute[281045]: 2025-12-02 10:05:53.766 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:53 localhost podman[312902]: 2025-12-02 10:05:53.786730864 +0000 UTC m=+0.056755466 container died c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea526ff6-f129-410b-b41d-c614aa65ab89, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:05:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68-userdata-shm.mount: Deactivated successfully. Dec 2 05:05:53 localhost podman[312902]: 2025-12-02 10:05:53.893988272 +0000 UTC m=+0.164012844 container cleanup c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea526ff6-f129-410b-b41d-c614aa65ab89, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:05:53 localhost systemd[1]: libpod-conmon-c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68.scope: Deactivated successfully. 
Dec 2 05:05:53 localhost podman[312914]: 2025-12-02 10:05:53.919826846 +0000 UTC m=+0.137065835 container remove c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea526ff6-f129-410b-b41d-c614aa65ab89, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:05:53 localhost nova_compute[281045]: 2025-12-02 10:05:53.931 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:53 localhost kernel: device tap53cc812e-e4 left promiscuous mode Dec 2 05:05:53 localhost nova_compute[281045]: 2025-12-02 10:05:53.944 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:54 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:05:54.070 262347 INFO neutron.agent.dhcp.agent [None req-fc5b86c3-8d59-4bf9-b63e-da03ae6e02d7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:54 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:05:54.255 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 05:05:54 localhost nova_compute[281045]: 2025-12-02 10:05:54.573 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:54 localhost podman[312933]: 2025-12-02 10:05:54.58230776 +0000 UTC m=+0.084306214 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 2 05:05:54 localhost podman[312933]: 2025-12-02 10:05:54.619575266 +0000 UTC m=+0.121573720 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:05:54 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: 
Deactivated successfully. Dec 2 05:05:54 localhost systemd[1]: var-lib-containers-storage-overlay-277e8dcddf9fb3f284bdf1370a104896fdcdf3617b2ea15ab092627caa367817-merged.mount: Deactivated successfully. Dec 2 05:05:54 localhost systemd[1]: run-netns-qdhcp\x2dea526ff6\x2df129\x2d410b\x2db41d\x2dc614aa65ab89.mount: Deactivated successfully. Dec 2 05:05:55 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s Dec 2 05:05:55 localhost nova_compute[281045]: 2025-12-02 10:05:55.391 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:55 localhost nova_compute[281045]: 2025-12-02 10:05:55.526 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:55 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 e114: 6 total, 6 up, 6 in Dec 2 05:05:56 localhost nova_compute[281045]: 2025-12-02 10:05:56.865 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v201: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 2.4 KiB/s wr, 36 op/s Dec 2 05:05:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:57 localhost nova_compute[281045]: 2025-12-02 10:05:57.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 
05:05:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v202: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 2.2 KiB/s wr, 33 op/s Dec 2 05:05:59 localhost nova_compute[281045]: 2025-12-02 10:05:59.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:00 localhost nova_compute[281045]: 2025-12-02 10:06:00.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:00 localhost nova_compute[281045]: 2025-12-02 10:06:00.529 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:00 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses Dec 2 05:06:00 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:06:00 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:06:00 localhost podman[312969]: 2025-12-02 10:06:00.871501208 +0000 UTC m=+0.067441325 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:06:00 localhost systemd[1]: tmp-crun.01FkQT.mount: Deactivated successfully. Dec 2 05:06:00 localhost nova_compute[281045]: 2025-12-02 10:06:00.975 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v203: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.9 KiB/s wr, 29 op/s Dec 2 05:06:01 localhost nova_compute[281045]: 2025-12-02 10:06:01.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:01 localhost nova_compute[281045]: 2025-12-02 10:06:01.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:01 localhost nova_compute[281045]: 2025-12-02 10:06:01.898 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:02 localhost nova_compute[281045]: 2025-12-02 10:06:02.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:02 localhost 
nova_compute[281045]: 2025-12-02 10:06:02.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:02 localhost nova_compute[281045]: 2025-12-02 10:06:02.546 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:06:02 localhost nova_compute[281045]: 2025-12-02 10:06:02.546 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:06:02 localhost nova_compute[281045]: 2025-12-02 10:06:02.547 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:06:02 localhost nova_compute[281045]: 2025-12-02 10:06:02.547 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:06:02 localhost nova_compute[281045]: 2025-12-02 10:06:02.547 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd 
(subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:06:03 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:06:03 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/209252438' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:06:03 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v204: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Dec 2 05:06:03 localhost nova_compute[281045]: 2025-12-02 10:06:03.016 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:06:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:06:03.177 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:06:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:06:03.179 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:06:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:06:03.179 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:06:03 localhost nova_compute[281045]: 2025-12-02 10:06:03.244 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:06:03 localhost nova_compute[281045]: 2025-12-02 10:06:03.246 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11545MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:06:03 localhost nova_compute[281045]: 2025-12-02 10:06:03.246 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:06:03 localhost nova_compute[281045]: 2025-12-02 10:06:03.247 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:06:03 localhost podman[239757]: time="2025-12-02T10:06:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:06:03 localhost podman[239757]: @ - - [02/Dec/2025:10:06:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:06:03 localhost nova_compute[281045]: 2025-12-02 10:06:03.660 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable 
vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:06:03 localhost nova_compute[281045]: 2025-12-02 10:06:03.661 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:06:03 localhost nova_compute[281045]: 2025-12-02 10:06:03.680 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:06:03 localhost podman[239757]: @ - - [02/Dec/2025:10:06:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19214 "" "Go-http-client/1.1" Dec 2 05:06:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:06:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/354601536' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:06:04 localhost nova_compute[281045]: 2025-12-02 10:06:04.127 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:06:04 localhost nova_compute[281045]: 2025-12-02 10:06:04.132 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:06:04 localhost nova_compute[281045]: 2025-12-02 10:06:04.151 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:06:04 localhost nova_compute[281045]: 2025-12-02 10:06:04.169 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:06:04 localhost nova_compute[281045]: 2025-12-02 10:06:04.169 281049 DEBUG 
oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:06:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:06:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2210170190' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:06:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:06:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2210170190' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:06:05 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v205: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Dec 2 05:06:05 localhost nova_compute[281045]: 2025-12-02 10:06:05.171 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:05 localhost nova_compute[281045]: 2025-12-02 10:06:05.172 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:06:05 localhost nova_compute[281045]: 2025-12-02 10:06:05.173 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of 
instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:06:05 localhost nova_compute[281045]: 2025-12-02 10:06:05.189 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 05:06:05 localhost nova_compute[281045]: 2025-12-02 10:06:05.190 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:05 localhost nova_compute[281045]: 2025-12-02 10:06:05.531 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:05 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:05.630 2 INFO neutron.agent.securitygroups_rpc [None req-616a401d-f858-48e0-bbb1-73e58fa51cbe 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['1e6a52d4-a530-4d1c-b3c3-fd5c65190a35']#033[00m Dec 2 05:06:05 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:05.896 2 INFO neutron.agent.securitygroups_rpc [None req-7c216765-b201-4648-8f14-301becf47f8c 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['1e6a52d4-a530-4d1c-b3c3-fd5c65190a35']#033[00m Dec 2 05:06:06 localhost nova_compute[281045]: 2025-12-02 10:06:06.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:06 localhost 
nova_compute[281045]: 2025-12-02 10:06:06.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:06:06 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:06.717 2 INFO neutron.agent.securitygroups_rpc [None req-008f26a2-2a9a-4275-8fd3-0db0ae3965dc 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:06:06 Dec 2 05:06:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 05:06:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap Dec 2 05:06:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['manila_data', '.mgr', 'volumes', 'vms', 'backups', 'images', 'manila_metadata'] Dec 2 05:06:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes Dec 2 05:06:06 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:06.888 2 INFO neutron.agent.securitygroups_rpc [None req-2ee1f36b-3e28-45da-9995-f4334e4d09c3 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:06 localhost nova_compute[281045]: 2025-12-02 10:06:06.947 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:06:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:06:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:06:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:06:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:06:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:06:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v206: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Dec 2 05:06:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:06:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:06:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 2 05:06:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:06:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Dec 2 05:06:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:06:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:06:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:06:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 2 05:06:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:06:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg 
target 0.0 quantized to 32 (current 32) Dec 2 05:06:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:06:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:06:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:06:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Dec 2 05:06:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:06:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:06:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:06:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:06:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:06:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:06:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:06:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:06:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:06:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:06:07 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:07.147 2 INFO neutron.agent.securitygroups_rpc [None req-07b24d70-b40e-4b6b-a4d7-126ffe953c74 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated 
['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:07 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:07.299 2 INFO neutron.agent.securitygroups_rpc [None req-44887692-98ed-4bee-8196-cf2b44c61f3b 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:07 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:07.454 2 INFO neutron.agent.securitygroups_rpc [None req-b501e38f-a705-4a3f-a758-0a1e958e6279 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:07 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:07.591 2 INFO neutron.agent.securitygroups_rpc [None req-7ad226a4-da1c-44ea-8953-97466b6b7a50 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:07 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:07.947 2 INFO neutron.agent.securitygroups_rpc [None req-45db658a-901d-430c-aa11-8109c9f781eb 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:08 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:08.147 2 INFO neutron.agent.securitygroups_rpc [None req-525377d0-23a2-43bc-9048-c1bbf6915b2f 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:08 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:06:08.252 262347 INFO neutron.agent.dhcp.agent [-] 
Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:06:08Z, description=, device_id=60f2c6f6-f230-49f9-b983-bd94d1e33602, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2c681799-15f6-4f8c-b3e7-c3f2ec57646e, ip_allocation=immediate, mac_address=fa:16:3e:7f:05:c2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1118, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:06:08Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:06:08 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:08.366 2 INFO neutron.agent.securitygroups_rpc [None req-09e30383-7aff-4fc0-a180-6509153c799d 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:08 localhost dnsmasq[262677]: read 
/var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:06:08 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:06:08 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:06:08 localhost podman[313051]: 2025-12-02 10:06:08.480799163 +0000 UTC m=+0.065593578 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:06:08 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:08.540 2 INFO neutron.agent.securitygroups_rpc [None req-0d565c71-9045-4969-b270-4f682b986cb3 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:08 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:06:08.687 262347 INFO neutron.agent.dhcp.agent [None req-e6f9e950-c382-40ad-8b4a-ba9dfbaa032e - - - - - -] DHCP configuration for ports {'2c681799-15f6-4f8c-b3e7-c3f2ec57646e'} is completed#033[00m Dec 2 05:06:09 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v207: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Dec 2 05:06:09 localhost nova_compute[281045]: 2025-12-02 10:06:09.129 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:09 localhost neutron_sriov_agent[255428]: 
2025-12-02 10:06:09.731 2 INFO neutron.agent.securitygroups_rpc [None req-61a0e9aa-9477-43cf-af07-616f49d4b972 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['94ceebea-e233-4f36-9a23-49456abf3258']#033[00m Dec 2 05:06:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:06:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:06:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:06:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:06:10 localhost systemd[1]: tmp-crun.OcqfEk.mount: Deactivated successfully. Dec 2 05:06:10 localhost podman[313073]: 2025-12-02 10:06:10.102183074 +0000 UTC m=+0.099672836 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 
'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:06:10 localhost podman[313074]: 2025-12-02 10:06:10.064941019 +0000 UTC m=+0.062197933 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:06:10 localhost podman[313074]: 2025-12-02 10:06:10.148992594 
+0000 UTC m=+0.146249578 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:06:10 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 05:06:10 localhost podman[313073]: 2025-12-02 10:06:10.184989841 +0000 UTC m=+0.182479643 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:06:10 localhost podman[313086]: 2025-12-02 10:06:10.169372071 +0000 UTC m=+0.153465221 container 
health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 05:06:10 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 05:06:10 localhost podman[313086]: 2025-12-02 10:06:10.26236002 +0000 UTC m=+0.246453180 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:06:10 localhost podman[313075]: 2025-12-02 10:06:10.271732709 +0000 UTC m=+0.260879735 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS 
Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:06:10 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:06:10 localhost podman[313075]: 2025-12-02 10:06:10.293132616 +0000 UTC m=+0.282279652 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:06:10 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:06:10 localhost nova_compute[281045]: 2025-12-02 10:06:10.534 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v208: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Dec 2 05:06:11 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:11.483 2 INFO neutron.agent.securitygroups_rpc [None req-a112143f-1afa-4cab-a314-c7a0cf01690b 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['bc246512-f2e7-49c6-b3c6-e51d67208518']#033[00m Dec 2 05:06:11 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:11.736 2 INFO neutron.agent.securitygroups_rpc [None req-2651c95b-dd09-4d1c-945d-8112466f351e 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['bc246512-f2e7-49c6-b3c6-e51d67208518']#033[00m Dec 2 05:06:11 localhost nova_compute[281045]: 2025-12-02 10:06:11.987 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:12 localhost nova_compute[281045]: 2025-12-02 10:06:12.006 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:12 localhost openstack_network_exporter[241816]: ERROR 10:06:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:06:12 localhost openstack_network_exporter[241816]: ERROR 10:06:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:06:12 localhost openstack_network_exporter[241816]: ERROR 10:06:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:06:12 
localhost openstack_network_exporter[241816]: ERROR 10:06:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:06:12 localhost openstack_network_exporter[241816]: Dec 2 05:06:12 localhost openstack_network_exporter[241816]: ERROR 10:06:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:06:12 localhost openstack_network_exporter[241816]: Dec 2 05:06:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:12 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:12.928 2 INFO neutron.agent.securitygroups_rpc [None req-b3b89eff-a637-4d13-a86c-dcfc347ff722 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['482dba13-8db1-4254-a853-7fa4b3df0a8e']#033[00m Dec 2 05:06:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v209: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Dec 2 05:06:13 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:13.105 2 INFO neutron.agent.securitygroups_rpc [None req-19b017d9-1c5d-463c-8fd1-6e01ac942ab6 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['482dba13-8db1-4254-a853-7fa4b3df0a8e']#033[00m Dec 2 05:06:13 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:13.582 2 INFO neutron.agent.securitygroups_rpc [None req-fdfabc3e-7f87-43e4-a533-8789992c1455 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']#033[00m Dec 2 05:06:13 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:13.952 2 INFO neutron.agent.securitygroups_rpc [None req-64456867-c89f-4fe7-8d63-40c9e8a08ded e6f97ef89976422db171867e1c0c59f0 3f0966ca3eec4301b9d84b4543ff9fdf - - 
default default] Security group member updated ['a857935d-02ea-4e3d-98f4-258f4647959a']#033[00m Dec 2 05:06:14 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:14.035 2 INFO neutron.agent.securitygroups_rpc [None req-5a1a208f-7619-41f5-8ff5-57ae95d40ae9 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']#033[00m Dec 2 05:06:14 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:14.234 2 INFO neutron.agent.securitygroups_rpc [None req-ba5eb572-7899-4268-a4d4-30f2b4a8108a 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']#033[00m Dec 2 05:06:14 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:14.414 2 INFO neutron.agent.securitygroups_rpc [None req-9ba9d0cb-ece7-40eb-b408-fce3b926db2b e6f97ef89976422db171867e1c0c59f0 3f0966ca3eec4301b9d84b4543ff9fdf - - default default] Security group member updated ['a857935d-02ea-4e3d-98f4-258f4647959a']#033[00m Dec 2 05:06:14 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:14.449 2 INFO neutron.agent.securitygroups_rpc [None req-1dee4ff0-8cdb-4950-b86d-3d1f17272691 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']#033[00m Dec 2 05:06:14 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:14.868 2 INFO neutron.agent.securitygroups_rpc [None req-c68f94c3-1ecf-4886-9e93-85ed57a0441f 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']#033[00m Dec 2 05:06:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.441 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 
localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:15 localhost nova_compute[281045]: 2025-12-02 10:06:15.536 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:15 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:15.566 2 INFO neutron.agent.securitygroups_rpc [None req-301242b3-4b9d-48c8-8e81-d6b06b4fcc41 e6f97ef89976422db171867e1c0c59f0 3f0966ca3eec4301b9d84b4543ff9fdf - - default default] Security group member updated ['a857935d-02ea-4e3d-98f4-258f4647959a']#033[00m Dec 2 05:06:15 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:15.600 2 INFO neutron.agent.securitygroups_rpc [None req-296124d6-ee7a-42c7-8fb1-fa352c7e4ccb 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']#033[00m Dec 2 05:06:17 localhost nova_compute[281045]: 2025-12-02 10:06:17.008 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v211: 177 pgs: 177 
active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Dec 2 05:06:17 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:17.264 2 INFO neutron.agent.securitygroups_rpc [None req-bb5bb253-d3fa-4182-a08a-6c77d73857f6 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['dc8aaaaf-7a11-4a4d-8334-5511e0a6c147']#033[00m Dec 2 05:06:17 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:17.420 2 INFO neutron.agent.securitygroups_rpc [None req-c73fb2ed-db0d-4633-bf8b-3646a66fbf65 e6f97ef89976422db171867e1c0c59f0 3f0966ca3eec4301b9d84b4543ff9fdf - - default default] Security group member updated ['a857935d-02ea-4e3d-98f4-258f4647959a']#033[00m Dec 2 05:06:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Dec 2 05:06:19 localhost nova_compute[281045]: 2025-12-02 10:06:19.659 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:19 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses Dec 2 05:06:19 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:06:19 localhost systemd[1]: tmp-crun.5MCw1Q.mount: Deactivated successfully. 
Dec 2 05:06:19 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:06:19 localhost podman[313175]: 2025-12-02 10:06:19.731819559 +0000 UTC m=+0.073483090 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 2 05:06:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:06:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 05:06:19 localhost podman[313190]: 2025-12-02 10:06:19.866327886 +0000 UTC m=+0.099375997 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:06:19 localhost podman[313191]: 2025-12-02 10:06:19.91848356 +0000 UTC m=+0.147403934 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down 
image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 2 05:06:19 localhost podman[313191]: 2025-12-02 10:06:19.931530361 +0000 UTC m=+0.160450745 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., 
url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41) Dec 2 05:06:19 localhost podman[313190]: 2025-12-02 10:06:19.932001746 +0000 UTC m=+0.165049847 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 
'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:06:19 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 05:06:19 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:06:20 localhost nova_compute[281045]: 2025-12-02 10:06:20.539 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Dec 2 05:06:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:06:21 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:06:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:06:21 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:06:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:06:21 localhost 
ceph-mgr[287188]: [progress INFO root] update: starting ev 2758bebc-bcf8-46ea-bb25-bcf4ca2ba5c2 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:06:21 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 2758bebc-bcf8-46ea-bb25-bcf4ca2ba5c2 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:06:21 localhost ceph-mgr[287188]: [progress INFO root] Completed event 2758bebc-bcf8-46ea-bb25-bcf4ca2ba5c2 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:06:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:06:21 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:06:21 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:06:21 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:06:22 localhost nova_compute[281045]: 2025-12-02 10:06:22.066 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:22 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:06:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:06:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v214: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Dec 2 05:06:23 localhost 
ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:06:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:06:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Dec 2 05:06:25 localhost systemd[1]: tmp-crun.1ztNir.mount: Deactivated successfully. Dec 2 05:06:25 localhost podman[313324]: 2025-12-02 10:06:25.084801047 +0000 UTC m=+0.086016006 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:06:25 localhost podman[313324]: 2025-12-02 10:06:25.092303618 +0000 UTC m=+0.093518567 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:06:25 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 05:06:25 localhost nova_compute[281045]: 2025-12-02 10:06:25.542 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:26 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:26.457 2 INFO neutron.agent.securitygroups_rpc [None req-55db991b-3b52-42dd-b07a-80ab3fefc470 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['72de153d-340c-4642-ae21-72dcd91d8ceb']#033[00m Dec 2 05:06:26 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:26.625 2 INFO neutron.agent.securitygroups_rpc [None req-3231b6f4-a459-453f-8b66-50672725d94d ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['72de153d-340c-4642-ae21-72dcd91d8ceb']#033[00m Dec 2 05:06:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v216: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Dec 2 05:06:27 localhost nova_compute[281045]: 2025-12-02 10:06:27.107 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:27 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:06:27.566 262347 INFO neutron.agent.linux.ip_lib [None req-953882f4-31f1-4474-9600-974a5d01de43 - - - - - -] Device tap21e6c00c-53 cannot be used as it has no MAC address#033[00m Dec 2 
05:06:27 localhost nova_compute[281045]: 2025-12-02 10:06:27.583 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:27 localhost kernel: device tap21e6c00c-53 entered promiscuous mode Dec 2 05:06:27 localhost NetworkManager[5967]: [1764669987.5920] manager: (tap21e6c00c-53): new Generic device (/org/freedesktop/NetworkManager/Devices/28) Dec 2 05:06:27 localhost nova_compute[281045]: 2025-12-02 10:06:27.592 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:27 localhost ovn_controller[153778]: 2025-12-02T10:06:27Z|00124|binding|INFO|Claiming lport 21e6c00c-53a4-4738-8a05-387fdaa114da for this chassis. Dec 2 05:06:27 localhost ovn_controller[153778]: 2025-12-02T10:06:27Z|00125|binding|INFO|21e6c00c-53a4-4738-8a05-387fdaa114da: Claiming unknown Dec 2 05:06:27 localhost systemd-udevd[313352]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:06:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:06:27.602 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-e6b3959d-7904-44ed-92bd-ec1be2b402a9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6b3959d-7904-44ed-92bd-ec1be2b402a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23a6fa6a-5ec4-4b90-b32a-df62162296c7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=21e6c00c-53a4-4738-8a05-387fdaa114da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 2 05:06:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:06:27.604 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 21e6c00c-53a4-4738-8a05-387fdaa114da in datapath e6b3959d-7904-44ed-92bd-ec1be2b402a9 bound to our chassis
Dec 2 05:06:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:06:27.605 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e6b3959d-7904-44ed-92bd-ec1be2b402a9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 2 05:06:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:06:27.606 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e326871f-e926-49a4-a6e3-aae5278a370c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 2 05:06:27 localhost ovn_controller[153778]: 2025-12-02T10:06:27Z|00126|binding|INFO|Setting lport 21e6c00c-53a4-4738-8a05-387fdaa114da ovn-installed in OVS
Dec 2 05:06:27 localhost ovn_controller[153778]: 2025-12-02T10:06:27Z|00127|binding|INFO|Setting lport 21e6c00c-53a4-4738-8a05-387fdaa114da up in Southbound
Dec 2 05:06:27 localhost nova_compute[281045]: 2025-12-02 10:06:27.635 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:06:27 localhost nova_compute[281045]: 2025-12-02 10:06:27.676 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:06:27 localhost nova_compute[281045]: 2025-12-02 10:06:27.703 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:06:27 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:27.732 2 INFO neutron.agent.securitygroups_rpc [None req-c8b7833a-90ae-4c85-a76b-1cc28b8de3de ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 2 05:06:28 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:28.064 2 INFO neutron.agent.securitygroups_rpc [None req-7e69b2ff-a5bb-4d6e-b782-47cf6529c7b2 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 2 05:06:28 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:28.202 2 INFO neutron.agent.securitygroups_rpc [None req-01786b72-4967-4664-b0a8-c04c5ee043aa ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 2 05:06:28 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:28.410 2 INFO neutron.agent.securitygroups_rpc [None req-660a054c-2d13-4375-bdeb-e4f5c9661010 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 2 05:06:28 localhost podman[313405]:
Dec 2 05:06:28 localhost podman[313405]: 2025-12-02 10:06:28.488287283 +0000 UTC m=+0.086878963 container create 70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b3959d-7904-44ed-92bd-ec1be2b402a9, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 2 05:06:28 localhost systemd[1]: Started libpod-conmon-70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691.scope.
Dec 2 05:06:28 localhost podman[313405]: 2025-12-02 10:06:28.445443125 +0000 UTC m=+0.044034845 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:06:28 localhost systemd[1]: Started libcrun container.
Dec 2 05:06:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/537a4fb83ba17556b8b20d1c5f8dbe5ca7c6131c1a44ffe17c98b82654d3b13f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:06:28 localhost podman[313405]: 2025-12-02 10:06:28.560987378 +0000 UTC m=+0.159579058 container init 70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b3959d-7904-44ed-92bd-ec1be2b402a9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:06:28 localhost podman[313405]: 2025-12-02 10:06:28.5747043 +0000 UTC m=+0.173295970 container start 70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b3959d-7904-44ed-92bd-ec1be2b402a9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 2 05:06:28 localhost ovn_controller[153778]: 2025-12-02T10:06:28Z|00128|binding|INFO|Removing iface tap21e6c00c-53 ovn-installed in OVS
Dec 2 05:06:28 localhost ovn_controller[153778]: 2025-12-02T10:06:28Z|00129|binding|INFO|Removing lport 21e6c00c-53a4-4738-8a05-387fdaa114da ovn-installed in OVS
Dec 2 05:06:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:06:28.577 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port faf58706-f1ff-4f07-97fb-9bba4fb63b23 with type ""
Dec 2 05:06:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:06:28.579 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-e6b3959d-7904-44ed-92bd-ec1be2b402a9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6b3959d-7904-44ed-92bd-ec1be2b402a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23a6fa6a-5ec4-4b90-b32a-df62162296c7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=21e6c00c-53a4-4738-8a05-387fdaa114da) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 2 05:06:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:06:28.581 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 21e6c00c-53a4-4738-8a05-387fdaa114da in datapath e6b3959d-7904-44ed-92bd-ec1be2b402a9 unbound from our chassis
Dec 2 05:06:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:06:28.581 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e6b3959d-7904-44ed-92bd-ec1be2b402a9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 2 05:06:28 localhost ovn_metadata_agent[159477]: 2025-12-02 10:06:28.582 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ad0850-8a86-4899-ada1-a61fd4310b6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 2 05:06:28 localhost dnsmasq[313423]: started, version 2.85 cachesize 150
Dec 2 05:06:28 localhost dnsmasq[313423]: DNS service limited to local subnets
Dec 2 05:06:28 localhost dnsmasq[313423]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:06:28 localhost dnsmasq[313423]: warning: no upstream servers configured
Dec 2 05:06:28 localhost dnsmasq-dhcp[313423]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 2 05:06:28 localhost dnsmasq[313423]: read /var/lib/neutron/dhcp/e6b3959d-7904-44ed-92bd-ec1be2b402a9/addn_hosts - 0 addresses
Dec 2 05:06:28 localhost dnsmasq-dhcp[313423]: read /var/lib/neutron/dhcp/e6b3959d-7904-44ed-92bd-ec1be2b402a9/host
Dec 2 05:06:28 localhost dnsmasq-dhcp[313423]: read /var/lib/neutron/dhcp/e6b3959d-7904-44ed-92bd-ec1be2b402a9/opts
Dec 2 05:06:28 localhost nova_compute[281045]: 2025-12-02 10:06:28.615 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:06:28 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:28.616 2 INFO neutron.agent.securitygroups_rpc [None req-62dae022-f3c9-4be5-80ba-7d0557eedc03 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 2 05:06:28 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:06:28.744 262347 INFO neutron.agent.dhcp.agent [None req-7f95caa9-976b-43ee-b5a8-51c6575fd272 - - - - - -] DHCP configuration for ports {'5b49bd3c-cffb-4469-b050-cedaa6445f9f'} is completed
Dec 2 05:06:28 localhost nova_compute[281045]: 2025-12-02 10:06:28.846 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:06:28 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:28.889 2 INFO neutron.agent.securitygroups_rpc [None req-ec75e4da-10e7-4730-8e6b-71c48e471048 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 2 05:06:28 localhost podman[313439]: 2025-12-02 10:06:28.90312743 +0000 UTC m=+0.063916957 container kill 70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b3959d-7904-44ed-92bd-ec1be2b402a9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:06:28 localhost dnsmasq[313423]: exiting on receipt of SIGTERM
Dec 2 05:06:28 localhost systemd[1]: libpod-70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691.scope: Deactivated successfully.
Dec 2 05:06:28 localhost podman[313452]: 2025-12-02 10:06:28.95127448 +0000 UTC m=+0.033609564 container died 70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b3959d-7904-44ed-92bd-ec1be2b402a9, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 2 05:06:28 localhost podman[313452]: 2025-12-02 10:06:28.994238272 +0000 UTC m=+0.076573376 container remove 70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b3959d-7904-44ed-92bd-ec1be2b402a9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 2 05:06:29 localhost nova_compute[281045]: 2025-12-02 10:06:29.005 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:06:29 localhost kernel: device tap21e6c00c-53 left promiscuous mode
Dec 2 05:06:29 localhost systemd[1]: libpod-conmon-70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691.scope: Deactivated successfully.
Dec 2 05:06:29 localhost nova_compute[281045]: 2025-12-02 10:06:29.019 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:06:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v217: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:06:29 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:06:29.047 262347 INFO neutron.agent.dhcp.agent [None req-1f322521-9352-46a5-b051-e7de73180fca - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 2 05:06:29 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:06:29.047 262347 INFO neutron.agent.dhcp.agent [None req-1f322521-9352-46a5-b051-e7de73180fca - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 2 05:06:29 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:29.257 2 INFO neutron.agent.securitygroups_rpc [None req-405af18e-31af-407d-8854-f380b293accc ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 2 05:06:29 localhost systemd[1]: tmp-crun.2A2M0W.mount: Deactivated successfully.
Dec 2 05:06:29 localhost systemd[1]: var-lib-containers-storage-overlay-537a4fb83ba17556b8b20d1c5f8dbe5ca7c6131c1a44ffe17c98b82654d3b13f-merged.mount: Deactivated successfully.
Dec 2 05:06:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691-userdata-shm.mount: Deactivated successfully.
Dec 2 05:06:29 localhost systemd[1]: run-netns-qdhcp\x2de6b3959d\x2d7904\x2d44ed\x2d92bd\x2dec1be2b402a9.mount: Deactivated successfully.
Dec 2 05:06:29 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:29.524 2 INFO neutron.agent.securitygroups_rpc [None req-f8e3c35c-c137-4121-a943-a4b83494d8a2 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 2 05:06:30 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:30.179 2 INFO neutron.agent.securitygroups_rpc [None req-9f2bf60d-db80-42ad-806a-1445118d8a03 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 2 05:06:30 localhost nova_compute[281045]: 2025-12-02 10:06:30.544 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:06:30 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:30.786 2 INFO neutron.agent.securitygroups_rpc [None req-9a1f4b78-6988-4ca6-b0f0-b52a2438af33 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 2 05:06:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v218: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:06:31 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:31.765 2 INFO neutron.agent.securitygroups_rpc [None req-1c27ab60-bacc-4e9e-b1c9-d6f9cf0e1b32 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['8ebd5526-cfd6-4dd0-8888-3d40098feb1a']
Dec 2 05:06:32 localhost nova_compute[281045]: 2025-12-02 10:06:32.142 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:06:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:06:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:06:33 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:33.364 2 INFO neutron.agent.securitygroups_rpc [None req-ec9364dd-dd89-44e2-a668-097e6474f1a7 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['f835d0d9-69c7-416b-b19f-71e98abbea19']
Dec 2 05:06:33 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:33.624 2 INFO neutron.agent.securitygroups_rpc [None req-37778547-7b0e-4196-bc87-08fdc55b8adf ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['f835d0d9-69c7-416b-b19f-71e98abbea19']
Dec 2 05:06:33 localhost podman[239757]: time="2025-12-02T10:06:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 05:06:33 localhost podman[239757]: @ - - [02/Dec/2025:10:06:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 2 05:06:33 localhost podman[239757]: @ - - [02/Dec/2025:10:06:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19213 "" "Go-http-client/1.1"
Dec 2 05:06:33 localhost ovn_metadata_agent[159477]: 2025-12-02 10:06:33.899 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 2 05:06:33 localhost ovn_metadata_agent[159477]: 2025-12-02 10:06:33.900 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 2 05:06:33 localhost nova_compute[281045]: 2025-12-02 10:06:33.950 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:06:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v220: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:06:35 localhost nova_compute[281045]: 2025-12-02 10:06:35.546 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:06:36 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:36.119 2 INFO neutron.agent.securitygroups_rpc [None req-4d405741-5ff5-4de3-bee3-cffdae397b25 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['a13484c0-648a-48f0-a8cb-29cdca97e066']
Dec 2 05:06:36 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:36.314 2 INFO neutron.agent.securitygroups_rpc [None req-b1eefb6d-0b2e-4576-bbe4-d313eb7d9799 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['a13484c0-648a-48f0-a8cb-29cdca97e066']
Dec 2 05:06:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:06:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:06:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:06:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:06:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:06:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:06:36 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:36.982 2 INFO neutron.agent.securitygroups_rpc [None req-9e362cac-bb61-4369-ad00-9f073d908c17 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']
Dec 2 05:06:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v221: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:06:37 localhost nova_compute[281045]: 2025-12-02 10:06:37.174 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:06:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e115 e115: 6 total, 6 up, 6 in
Dec 2 05:06:37 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:37.404 2 INFO neutron.agent.securitygroups_rpc [None req-24128e00-40c9-486f-b5be-6d4fcff90c40 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']
Dec 2 05:06:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:06:37 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:37.944 2 INFO neutron.agent.securitygroups_rpc [None req-c72c5dd9-5995-4df5-a6ad-e067a8fdaf10 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']
Dec 2 05:06:38 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:38.315 2 INFO neutron.agent.securitygroups_rpc [None req-1079cc4d-64e8-4687-8a8b-22fa9980bbe7 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']
Dec 2 05:06:38 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:38.598 2 INFO neutron.agent.securitygroups_rpc [None req-37425f6d-3678-4ce3-8643-3e657bae2eff ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']
Dec 2 05:06:38 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:38.920 2 INFO neutron.agent.securitygroups_rpc [None req-5dbcc1aa-f594-4336-8b87-6f8ec002ed0b ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']
Dec 2 05:06:39 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v223: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 2 05:06:39 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e116 e116: 6 total, 6 up, 6 in
Dec 2 05:06:40 localhost nova_compute[281045]: 2025-12-02 10:06:40.548 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:06:40 localhost neutron_sriov_agent[255428]: 2025-12-02 10:06:40.868 2 INFO neutron.agent.securitygroups_rpc [None req-bbaaa1af-170e-49f9-ab51-fca299624b09 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['ef31c17a-e7e0-47e3-9c93-83c68ae18a93']
Dec 2 05:06:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 05:06:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 05:06:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 05:06:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 05:06:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v225: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Dec 2 05:06:41 localhost podman[313483]: 2025-12-02 10:06:41.143789801 +0000 UTC m=+0.134265190 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 2 05:06:41 localhost podman[313479]: 2025-12-02 10:06:41.10897326 +0000 UTC m=+0.108650811 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 2 05:06:41 localhost podman[313479]: 2025-12-02 10:06:41.192832789 +0000 UTC m=+0.192510370 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 2 05:06:41 localhost podman[313483]: 2025-12-02 10:06:41.207279863 +0000 UTC m=+0.197755272 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Dec 2 05:06:41 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 05:06:41 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 05:06:41 localhost podman[313480]: 2025-12-02 10:06:41.203104385 +0000 UTC m=+0.198271698 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:06:41 localhost podman[313480]: 2025-12-02 10:06:41.2842161 +0000 UTC m=+0.279383393 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:06:41 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:06:41 localhost podman[313481]: 2025-12-02 10:06:41.307190526 +0000 UTC m=+0.299603035 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute) Dec 2 05:06:41 localhost podman[313481]: 2025-12-02 10:06:41.316766421 +0000 UTC m=+0.309178920 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Dec 
2 05:06:41 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:06:41 localhost ovn_metadata_agent[159477]: 2025-12-02 10:06:41.902 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:06:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e117 e117: 6 total, 6 up, 6 in Dec 2 05:06:42 localhost openstack_network_exporter[241816]: ERROR 10:06:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:06:42 localhost openstack_network_exporter[241816]: ERROR 10:06:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:06:42 localhost openstack_network_exporter[241816]: ERROR 10:06:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:06:42 localhost openstack_network_exporter[241816]: ERROR 10:06:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:06:42 localhost openstack_network_exporter[241816]: Dec 2 05:06:42 localhost openstack_network_exporter[241816]: ERROR 10:06:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:06:42 localhost openstack_network_exporter[241816]: Dec 2 05:06:42 localhost nova_compute[281045]: 2025-12-02 10:06:42.176 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 
full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v227: 177 pgs: 177 active+clean; 209 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 11 MiB/s wr, 136 op/s Dec 2 05:06:44 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e118 e118: 6 total, 6 up, 6 in Dec 2 05:06:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v229: 177 pgs: 177 active+clean; 209 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 11 MiB/s wr, 111 op/s Dec 2 05:06:45 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e119 e119: 6 total, 6 up, 6 in Dec 2 05:06:45 localhost nova_compute[281045]: 2025-12-02 10:06:45.551 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 209 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 11 MiB/s wr, 111 op/s Dec 2 05:06:47 localhost nova_compute[281045]: 2025-12-02 10:06:47.209 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e120 e120: 6 total, 6 up, 6 in Dec 2 05:06:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:48 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e121 e121: 6 total, 6 up, 6 in Dec 2 05:06:49 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v234: 177 pgs: 177 active+clean; 145 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 152 KiB/s rd, 9.8 MiB/s wr, 216 op/s Dec 2 05:06:49 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e122 e122: 6 total, 6 up, 6 in Dec 2 05:06:49 localhost 
sshd[313565]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:06:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:06:50 localhost podman[313567]: 2025-12-02 10:06:50.092565678 +0000 UTC m=+0.090552395 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350) Dec 2 05:06:50 localhost podman[313567]: 2025-12-02 10:06:50.105875378 +0000 UTC m=+0.103862125 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., config_id=edpm, release=1755695350, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9) Dec 2 05:06:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:06:50 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:06:50 localhost podman[313585]: 2025-12-02 10:06:50.188274371 +0000 UTC m=+0.069770906 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:06:50 localhost podman[313585]: 2025-12-02 10:06:50.226014562 +0000 UTC m=+0.107511057 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:06:50 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:06:50 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e123 e123: 6 total, 6 up, 6 in Dec 2 05:06:50 localhost nova_compute[281045]: 2025-12-02 10:06:50.553 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:51 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e124 e124: 6 total, 6 up, 6 in Dec 2 05:06:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 145 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 196 KiB/s rd, 13 MiB/s wr, 277 op/s Dec 2 05:06:52 localhost nova_compute[281045]: 2025-12-02 10:06:52.260 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e125 e125: 6 total, 6 up, 6 in Dec 2 05:06:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:53 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 87 KiB/s rd, 5.5 KiB/s wr, 118 op/s Dec 2 05:06:54 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e126 e126: 6 total, 6 up, 6 in Dec 2 05:06:55 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v242: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 4.6 KiB/s wr, 99 op/s Dec 2 05:06:55 localhost nova_compute[281045]: 2025-12-02 10:06:55.556 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 05:06:56 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e127 e127: 6 total, 6 up, 6 in Dec 2 05:06:56 localhost podman[313608]: 2025-12-02 10:06:56.074949272 +0000 UTC m=+0.076000178 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:06:56 localhost podman[313608]: 2025-12-02 
10:06:56.087747635 +0000 UTC m=+0.088798481 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:06:56 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:06:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v244: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 3.7 KiB/s wr, 78 op/s Dec 2 05:06:57 localhost nova_compute[281045]: 2025-12-02 10:06:57.263 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e128 e128: 6 total, 6 up, 6 in Dec 2 05:06:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v246: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.7 KiB/s wr, 56 op/s Dec 2 05:06:59 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e129 e129: 6 total, 6 up, 6 in Dec 2 05:06:59 localhost nova_compute[281045]: 2025-12-02 10:06:59.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:59 localhost nova_compute[281045]: 2025-12-02 10:06:59.529 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:00 localhost nova_compute[281045]: 2025-12-02 10:07:00.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:00 localhost 
nova_compute[281045]: 2025-12-02 10:07:00.559 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.7 KiB/s wr, 56 op/s Dec 2 05:07:01 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e130 e130: 6 total, 6 up, 6 in Dec 2 05:07:02 localhost nova_compute[281045]: 2025-12-02 10:07:02.299 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:02 localhost nova_compute[281045]: 2025-12-02 10:07:02.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:03 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v250: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 67 KiB/s rd, 4.2 KiB/s wr, 91 op/s Dec 2 05:07:03 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e131 e131: 6 total, 6 up, 6 in Dec 2 05:07:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:03.177 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:07:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:03.178 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:07:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:03.178 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:07:03 localhost nova_compute[281045]: 2025-12-02 10:07:03.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:03 localhost nova_compute[281045]: 2025-12-02 10:07:03.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:03 localhost nova_compute[281045]: 2025-12-02 10:07:03.549 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:07:03 localhost nova_compute[281045]: 2025-12-02 10:07:03.550 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:07:03 localhost nova_compute[281045]: 2025-12-02 10:07:03.551 281049 DEBUG 
oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:07:03 localhost nova_compute[281045]: 2025-12-02 10:07:03.551 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:07:03 localhost nova_compute[281045]: 2025-12-02 10:07:03.551 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:07:03 localhost podman[239757]: time="2025-12-02T10:07:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:07:03 localhost podman[239757]: @ - - [02/Dec/2025:10:07:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:07:03 localhost podman[239757]: @ - - [02/Dec/2025:10:07:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19213 "" "Go-http-client/1.1" Dec 2 05:07:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:07:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/7662486' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:07:04 localhost nova_compute[281045]: 2025-12-02 10:07:04.032 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:07:04 localhost nova_compute[281045]: 2025-12-02 10:07:04.273 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:07:04 localhost nova_compute[281045]: 2025-12-02 10:07:04.275 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11538MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", 
"vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:07:04 localhost nova_compute[281045]: 2025-12-02 10:07:04.275 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:07:04 localhost nova_compute[281045]: 2025-12-02 10:07:04.275 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:07:04 localhost nova_compute[281045]: 2025-12-02 10:07:04.357 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total 
usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:07:04 localhost nova_compute[281045]: 2025-12-02 10:07:04.358 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:07:04 localhost nova_compute[281045]: 2025-12-02 10:07:04.384 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:07:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:07:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/38028727' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:07:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:07:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/38028727' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:07:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:07:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2800528967' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:07:04 localhost nova_compute[281045]: 2025-12-02 10:07:04.868 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:07:04 localhost nova_compute[281045]: 2025-12-02 10:07:04.875 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:07:04 localhost nova_compute[281045]: 2025-12-02 10:07:04.948 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:07:04 localhost nova_compute[281045]: 2025-12-02 10:07:04.951 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:07:04 localhost nova_compute[281045]: 2025-12-02 10:07:04.951 281049 DEBUG 
oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:07:05 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v252: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.5 KiB/s wr, 34 op/s Dec 2 05:07:05 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e132 e132: 6 total, 6 up, 6 in Dec 2 05:07:05 localhost systemd[1]: virtsecretd.service: Deactivated successfully. Dec 2 05:07:05 localhost nova_compute[281045]: 2025-12-02 10:07:05.562 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:05 localhost nova_compute[281045]: 2025-12-02 10:07:05.953 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:05 localhost nova_compute[281045]: 2025-12-02 10:07:05.954 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:07:05 localhost nova_compute[281045]: 2025-12-02 10:07:05.955 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:07:06 localhost nova_compute[281045]: 2025-12-02 10:07:06.243 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find 
any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 05:07:06 localhost nova_compute[281045]: 2025-12-02 10:07:06.245 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:06 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:06.862 262347 INFO neutron.agent.linux.ip_lib [None req-584f5906-6980-460f-861a-05b08d948459 - - - - - -] Device tap03c51554-b0 cannot be used as it has no MAC address#033[00m Dec 2 05:07:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:07:06 Dec 2 05:07:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 05:07:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap Dec 2 05:07:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['images', 'manila_metadata', '.mgr', 'backups', 'volumes', 'manila_data', 'vms'] Dec 2 05:07:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes Dec 2 05:07:06 localhost nova_compute[281045]: 2025-12-02 10:07:06.937 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:06 localhost kernel: device tap03c51554-b0 entered promiscuous mode Dec 2 05:07:06 localhost NetworkManager[5967]: [1764670026.9468] manager: (tap03c51554-b0): new Generic device (/org/freedesktop/NetworkManager/Devices/29) Dec 2 05:07:06 localhost ovn_controller[153778]: 2025-12-02T10:07:06Z|00130|binding|INFO|Claiming lport 03c51554-b0d7-401d-888a-0d8ea49e9e4d for this chassis. 
Dec 2 05:07:06 localhost ovn_controller[153778]: 2025-12-02T10:07:06Z|00131|binding|INFO|03c51554-b0d7-401d-888a-0d8ea49e9e4d: Claiming unknown Dec 2 05:07:06 localhost nova_compute[281045]: 2025-12-02 10:07:06.946 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:06 localhost systemd-udevd[313682]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:07:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:07:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:07:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:07:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:07:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:07:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:07:06 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:06.969 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-0e36d60b-6052-4646-b258-2b7e0612d401', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e36d60b-6052-4646-b258-2b7e0612d401', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '1', 
'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1b0b592-a188-42df-896f-4a5386fe2db9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=03c51554-b0d7-401d-888a-0d8ea49e9e4d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:06 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:06.970 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 03c51554-b0d7-401d-888a-0d8ea49e9e4d in datapath 0e36d60b-6052-4646-b258-2b7e0612d401 bound to our chassis#033[00m Dec 2 05:07:06 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:06.972 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2324623e-fa65-4407-965f-e7d64ab2f4e6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:07:06 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:06.972 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e36d60b-6052-4646-b258-2b7e0612d401, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:07:06 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:06.973 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[8316f13e-3dd3-4b24-9852-53741d61103f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:07:06 localhost journal[229262]: ethtool ioctl error on tap03c51554-b0: No such device Dec 2 05:07:06 localhost ovn_controller[153778]: 2025-12-02T10:07:06Z|00132|binding|INFO|Setting lport 03c51554-b0d7-401d-888a-0d8ea49e9e4d ovn-installed in OVS Dec 2 05:07:06 localhost 
ovn_controller[153778]: 2025-12-02T10:07:06Z|00133|binding|INFO|Setting lport 03c51554-b0d7-401d-888a-0d8ea49e9e4d up in Southbound Dec 2 05:07:06 localhost journal[229262]: ethtool ioctl error on tap03c51554-b0: No such device Dec 2 05:07:06 localhost nova_compute[281045]: 2025-12-02 10:07:06.987 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:06 localhost journal[229262]: ethtool ioctl error on tap03c51554-b0: No such device Dec 2 05:07:07 localhost journal[229262]: ethtool ioctl error on tap03c51554-b0: No such device Dec 2 05:07:07 localhost journal[229262]: ethtool ioctl error on tap03c51554-b0: No such device Dec 2 05:07:07 localhost journal[229262]: ethtool ioctl error on tap03c51554-b0: No such device Dec 2 05:07:07 localhost journal[229262]: ethtool ioctl error on tap03c51554-b0: No such device Dec 2 05:07:07 localhost nova_compute[281045]: 2025-12-02 10:07:07.023 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:07 localhost journal[229262]: ethtool ioctl error on tap03c51554-b0: No such device Dec 2 05:07:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v254: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.5 KiB/s wr, 34 op/s Dec 2 05:07:07 localhost nova_compute[281045]: 2025-12-02 10:07:07.057 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:07:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:07:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 
0.006161449609156895 quantized to 1 (current 1) Dec 2 05:07:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:07:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Dec 2 05:07:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:07:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021774090359203424 quantized to 32 (current 32) Dec 2 05:07:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:07:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 2 05:07:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:07:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:07:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:07:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:07:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:07:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Dec 2 05:07:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:07:07 
localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:07:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:07:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:07:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:07:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:07:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:07:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:07:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:07:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:07:07 localhost nova_compute[281045]: 2025-12-02 10:07:07.302 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:08 localhost podman[313753]: Dec 2 05:07:08 localhost podman[313753]: 2025-12-02 10:07:08.15847778 +0000 UTC m=+0.084862651 container create c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.build-date=20251125) Dec 2 05:07:08 localhost systemd[1]: Started libpod-conmon-c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf.scope. Dec 2 05:07:08 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:08.206 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 2324623e-fa65-4407-965f-e7d64ab2f4e6 with type ""#033[00m Dec 2 05:07:08 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:08.208 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-0e36d60b-6052-4646-b258-2b7e0612d401', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e36d60b-6052-4646-b258-2b7e0612d401', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1b0b592-a188-42df-896f-4a5386fe2db9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=03c51554-b0d7-401d-888a-0d8ea49e9e4d) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:08 localhost ovn_controller[153778]: 2025-12-02T10:07:08Z|00134|binding|INFO|Removing iface tap03c51554-b0 ovn-installed in OVS Dec 2 05:07:08 localhost ovn_controller[153778]: 
2025-12-02T10:07:08Z|00135|binding|INFO|Removing lport 03c51554-b0d7-401d-888a-0d8ea49e9e4d ovn-installed in OVS Dec 2 05:07:08 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:08.212 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 03c51554-b0d7-401d-888a-0d8ea49e9e4d in datapath 0e36d60b-6052-4646-b258-2b7e0612d401 unbound from our chassis#033[00m Dec 2 05:07:08 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:08.214 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e36d60b-6052-4646-b258-2b7e0612d401, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:07:08 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:08.215 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[c97e17a5-ad00-4ece-b179-1eeaf242154b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:07:08 localhost podman[313753]: 2025-12-02 10:07:08.116727246 +0000 UTC m=+0.043112137 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:07:08 localhost nova_compute[281045]: 2025-12-02 10:07:08.247 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:08 localhost systemd[1]: Started libcrun container. 
Dec 2 05:07:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6410f4a3cbc4419ce0d2d2f282ad5265db5a535e7cfdaa1501af2234dd5190fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:07:08 localhost podman[313753]: 2025-12-02 10:07:08.280678078 +0000 UTC m=+0.207062949 container init c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:07:08 localhost podman[313753]: 2025-12-02 10:07:08.292990786 +0000 UTC m=+0.219375667 container start c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:07:08 localhost dnsmasq[313771]: started, version 2.85 cachesize 150 Dec 2 05:07:08 localhost dnsmasq[313771]: DNS service limited to local subnets Dec 2 05:07:08 localhost dnsmasq[313771]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:07:08 localhost dnsmasq[313771]: warning: no upstream servers configured Dec 
2 05:07:08 localhost dnsmasq-dhcp[313771]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:07:08 localhost dnsmasq[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/addn_hosts - 0 addresses Dec 2 05:07:08 localhost dnsmasq-dhcp[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/host Dec 2 05:07:08 localhost dnsmasq-dhcp[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/opts Dec 2 05:07:08 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:08.508 262347 INFO neutron.agent.dhcp.agent [None req-0eab09cc-2c3f-4379-b687-e0dfb32c385f - - - - - -] DHCP configuration for ports {'6bb8688d-8be9-4786-8741-458afd004055'} is completed#033[00m Dec 2 05:07:08 localhost nova_compute[281045]: 2025-12-02 10:07:08.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:08 localhost nova_compute[281045]: 2025-12-02 10:07:08.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:07:08 localhost dnsmasq[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/addn_hosts - 0 addresses Dec 2 05:07:08 localhost podman[313787]: 2025-12-02 10:07:08.656083343 +0000 UTC m=+0.050325709 container kill c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 2 05:07:08 localhost dnsmasq-dhcp[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/host Dec 2 05:07:08 localhost dnsmasq-dhcp[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/opts Dec 2 05:07:08 localhost nova_compute[281045]: 2025-12-02 10:07:08.810 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:08 localhost kernel: device tap03c51554-b0 left promiscuous mode Dec 2 05:07:08 localhost nova_compute[281045]: 2025-12-02 10:07:08.830 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:09 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v255: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 2.4 KiB/s wr, 61 op/s Dec 2 05:07:09 localhost systemd[1]: tmp-crun.mujDJm.mount: Deactivated successfully. 
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.554 262347 INFO neutron.agent.dhcp.agent [None req-03e95dfb-f9cb-4bdd-956a-d5a5fc422372 - - - - - -] DHCP configuration for ports {'6bb8688d-8be9-4786-8741-458afd004055'} is completed
Dec 2 05:07:09 localhost dnsmasq[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/addn_hosts - 0 addresses
Dec 2 05:07:09 localhost dnsmasq-dhcp[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/host
Dec 2 05:07:09 localhost dnsmasq-dhcp[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/opts
Dec 2 05:07:09 localhost podman[313827]: 2025-12-02 10:07:09.736197498 +0000 UTC m=+0.063406310 container kill c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent [None req-4e182ea0-0e96-4cf5-b907-fabc3c917168 - - - - - -] Unable to reload_allocations dhcp for 0e36d60b-6052-4646-b258-2b7e0612d401.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap03c51554-b0 not found in namespace qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401.
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     return fut.result()
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     raise self._exception
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap03c51554-b0 not found in namespace qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401.
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent
Dec 2 05:07:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.761 262347 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Dec 2 05:07:09 localhost nova_compute[281045]: 2025-12-02 10:07:09.978 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:07:10 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:10.041 262347 INFO neutron.agent.dhcp.agent [None req-633af77c-d730-4d72-9be7-502ca6237d88 - - - - - -] All active networks have been fetched through RPC.
Dec 2 05:07:10 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:10.042 262347 INFO neutron.agent.dhcp.agent [-] Starting network 0e36d60b-6052-4646-b258-2b7e0612d401 dhcp configuration
Dec 2 05:07:10 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:10.044 262347 INFO neutron.agent.dhcp.agent [-] Finished network 0e36d60b-6052-4646-b258-2b7e0612d401 dhcp configuration
Dec 2 05:07:10 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:10.044 262347 INFO neutron.agent.dhcp.agent [-] Starting network 10e86610-feac-4352-ad95-9bedaf95124c dhcp configuration
Dec 2 05:07:10 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:10.044 262347 INFO neutron.agent.dhcp.agent [-] Finished network 10e86610-feac-4352-ad95-9bedaf95124c dhcp configuration
Dec 2 05:07:10 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:10.044 262347 INFO neutron.agent.dhcp.agent [None req-633af77c-d730-4d72-9be7-502ca6237d88 - - - - - -] Synchronizing state complete
Dec 2 05:07:10 localhost dnsmasq[313771]: exiting on receipt of SIGTERM
Dec 2 05:07:10 localhost podman[313857]: 2025-12-02 10:07:10.228603251 +0000 UTC m=+0.057464328 container kill c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 2 05:07:10 localhost systemd[1]: libpod-c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf.scope: Deactivated successfully.
Dec 2 05:07:10 localhost podman[313871]: 2025-12-02 10:07:10.304386762 +0000 UTC m=+0.057360675 container died c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 2 05:07:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf-userdata-shm.mount: Deactivated successfully.
Dec 2 05:07:10 localhost podman[313871]: 2025-12-02 10:07:10.338211462 +0000 UTC m=+0.091185335 container cleanup c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 2 05:07:10 localhost systemd[1]: libpod-conmon-c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf.scope: Deactivated successfully.
Dec 2 05:07:10 localhost podman[313872]: 2025-12-02 10:07:10.37359415 +0000 UTC m=+0.122297062 container remove c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 2 05:07:10 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:10.424 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 2 05:07:10 localhost nova_compute[281045]: 2025-12-02 10:07:10.565 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:07:10 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:10.772 2 INFO neutron.agent.securitygroups_rpc [None req-f9cb0719-6f1e-4498-ae5c-3d1490c7cf9b c04b0c1b682647b3a235292b9ca1b605 2b57b1fad39449b49cbbffbb5c62906d - - default default] Security group member updated ['dba82d8e-ac81-4899-ab61-fcab2136c60b']
Dec 2 05:07:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v256: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 35 op/s
Dec 2 05:07:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 e133: 6 total, 6 up, 6 in
Dec 2 05:07:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 05:07:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 05:07:11 localhost systemd[1]: var-lib-containers-storage-overlay-6410f4a3cbc4419ce0d2d2f282ad5265db5a535e7cfdaa1501af2234dd5190fd-merged.mount: Deactivated successfully.
Dec 2 05:07:11 localhost systemd[1]: run-netns-qdhcp\x2d0e36d60b\x2d6052\x2d4646\x2db258\x2d2b7e0612d401.mount: Deactivated successfully.
Dec 2 05:07:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 05:07:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 8400.1 total, 600.0 interval
Cumulative writes: 8408 writes, 34K keys, 8408 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
Cumulative WAL: 8408 writes, 2192 syncs, 3.84 writes per sync, written: 0.03 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 3369 writes, 11K keys, 3369 commit groups, 1.0 writes per commit group, ingest: 11.49 MB, 0.02 MB/s
Interval WAL: 3369 writes, 1442 syncs, 2.34 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 2 05:07:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 05:07:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 05:07:11 localhost podman[313898]: 2025-12-02 10:07:11.345094427 +0000 UTC m=+0.098109979 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 2 05:07:11 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:11.355 2 INFO neutron.agent.securitygroups_rpc [None req-0dd50eb4-1108-4728-8f57-13a5878ac244 c04b0c1b682647b3a235292b9ca1b605 2b57b1fad39449b49cbbffbb5c62906d - - default default] Security group member updated ['dba82d8e-ac81-4899-ab61-fcab2136c60b']
Dec 2 05:07:11 localhost systemd[1]: tmp-crun.GFI9zg.mount: Deactivated successfully.
Dec 2 05:07:11 localhost podman[313898]: 2025-12-02 10:07:11.383062354 +0000 UTC m=+0.136077946 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent)
Dec 2 05:07:11 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 05:07:11 localhost podman[313899]: 2025-12-02 10:07:11.385149048 +0000 UTC m=+0.134912700 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:07:11 localhost podman[313930]: 2025-12-02 10:07:11.450283021 +0000 UTC m=+0.093259959 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 05:07:11 localhost podman[313930]: 2025-12-02 10:07:11.461927019 +0000 UTC m=+0.104903977 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 05:07:11 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 2 05:07:11 localhost podman[313931]: 2025-12-02 10:07:11.510180473 +0000 UTC m=+0.151663235 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 2 05:07:11 localhost podman[313899]: 2025-12-02 10:07:11.518167828 +0000 UTC m=+0.267931510 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 2 05:07:11 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 05:07:11 localhost podman[313931]: 2025-12-02 10:07:11.575517432 +0000 UTC m=+0.217000214 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125)
Dec 2 05:07:11 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 2 05:07:12 localhost openstack_network_exporter[241816]: ERROR 10:07:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 05:07:12 localhost openstack_network_exporter[241816]: ERROR 10:07:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:07:12 localhost openstack_network_exporter[241816]: ERROR 10:07:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:07:12 localhost openstack_network_exporter[241816]: ERROR 10:07:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 05:07:12 localhost openstack_network_exporter[241816]:
Dec 2 05:07:12 localhost openstack_network_exporter[241816]: ERROR 10:07:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 05:07:12 localhost openstack_network_exporter[241816]:
Dec 2 05:07:12 localhost nova_compute[281045]: 2025-12-02 10:07:12.304 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:07:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:07:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v258: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 35 op/s
Dec 2 05:07:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v259: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 1.0 KiB/s wr, 28 op/s
Dec 2 05:07:15 localhost nova_compute[281045]: 2025-12-02 10:07:15.566 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:07:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 05:07:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 8400.2 total, 600.0 interval
Cumulative writes: 9909 writes, 40K keys, 9909 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
Cumulative WAL: 9909 writes, 2429 syncs, 4.08 writes per sync, written: 0.03 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4031 writes, 14K keys, 4031 commit groups, 1.0 writes per commit group, ingest: 14.71 MB, 0.02 MB/s
Interval WAL: 4031 writes, 1640 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 2 05:07:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v260: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1023 B/s wr, 28 op/s
Dec 2 05:07:17 localhost nova_compute[281045]: 2025-12-02 10:07:17.342 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:07:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:07:18 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:18.066 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:17Z, description=, device_id=adc10196-e9bc-4c45-94b4-e5bb526e2d9c, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0244b042-c6ca-4971-a425-6ec6d03f8746, ip_allocation=immediate, mac_address=fa:16:3e:58:38:6a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1579, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:17Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 2 05:07:18 localhost systemd[1]: tmp-crun.7PkrO5.mount: Deactivated successfully.
Dec 2 05:07:18 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 2 05:07:18 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 2 05:07:18 localhost podman[314000]: 2025-12-02 10:07:18.286283764 +0000 UTC m=+0.069787607 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:07:18 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 2 05:07:18 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:18.563 262347 INFO neutron.agent.dhcp.agent [None req-0f9094a1-d847-4d51-a354-88a843aa1434 - - - - - -] DHCP configuration for ports {'0244b042-c6ca-4971-a425-6ec6d03f8746'} is completed
Dec 2 05:07:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v261: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail
Dec 2 05:07:20 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:20.069 2 INFO neutron.agent.securitygroups_rpc [None req-0ca3d260-f659-40d6-b699-f106118e6211 f6abbbfcc7d54e81b5693b2401a25e09 5ea39db037534e2087a54e8a052ad24e - - default default] Security group member updated ['377ae0fe-81df-41e0-8ef6-1afd307f6beb']
Dec 2 05:07:20 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:20.098 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:19Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9b2c8ab4-2d26-4ee7-86fa-e39f4e601823, ip_allocation=immediate, mac_address=fa:16:3e:7a:be:2a, name=tempest-RoutersAdminNegativeTest-749451820, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=True, project_id=5ea39db037534e2087a54e8a052ad24e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['377ae0fe-81df-41e0-8ef6-1afd307f6beb'], standard_attr_id=1603, status=DOWN, tags=[], tenant_id=5ea39db037534e2087a54e8a052ad24e, updated_at=2025-12-02T10:07:19Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 2 05:07:20 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 2 05:07:20 localhost podman[314035]: 2025-12-02 10:07:20.30257032 +0000 UTC m=+0.050379610 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:07:20 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 2 05:07:20 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 2 05:07:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 05:07:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 05:07:20 localhost podman[314049]: 2025-12-02 10:07:20.39131365 +0000 UTC m=+0.069282512 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', 
'--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:07:20 localhost podman[314049]: 2025-12-02 10:07:20.397792169 +0000 UTC m=+0.075761031 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:07:20 localhost systemd[1]: 
3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:07:20 localhost podman[314050]: 2025-12-02 10:07:20.436303463 +0000 UTC m=+0.111517921 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container) Dec 2 05:07:20 localhost podman[314050]: 2025-12-02 10:07:20.445828796 +0000 UTC m=+0.121043284 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, 
release=1755695350) Dec 2 05:07:20 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 05:07:20 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:20.529 262347 INFO neutron.agent.dhcp.agent [None req-c8c4c1a7-a245-42b3-817a-a671973e1771 - - - - - -] DHCP configuration for ports {'9b2c8ab4-2d26-4ee7-86fa-e39f4e601823'} is completed#033[00m Dec 2 05:07:20 localhost nova_compute[281045]: 2025-12-02 10:07:20.568 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:20 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:07:20 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:20 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:20 localhost podman[314111]: 2025-12-02 10:07:20.586809372 +0000 UTC m=+0.061693638 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:07:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v262: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Dec 2 05:07:21 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:21.634 2 INFO neutron.agent.securitygroups_rpc [None req-9c96eaf7-e1a3-4805-a32c-c883041fe7ca 
f6abbbfcc7d54e81b5693b2401a25e09 5ea39db037534e2087a54e8a052ad24e - - default default] Security group member updated ['377ae0fe-81df-41e0-8ef6-1afd307f6beb']#033[00m Dec 2 05:07:21 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses Dec 2 05:07:21 localhost podman[314150]: 2025-12-02 10:07:21.936527118 +0000 UTC m=+0.061232643 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:07:21 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:21 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:22 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:22.052 262347 INFO neutron.agent.dhcp.agent [None req-49a98392-4bb0-40ef-bc50-4aaeb85b17cc - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:21Z, description=, device_id=faa39e96-d7c3-48ec-b5b0-f4420251b339, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d178cd50-67bc-475e-bb5c-e4cf16815921, ip_allocation=immediate, mac_address=fa:16:3e:ba:a2:17, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, 
dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1616, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:21Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:07:22 localhost systemd[1]: tmp-crun.q857O1.mount: Deactivated successfully. 
Dec 2 05:07:22 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:07:22 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:22 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:22 localhost podman[314223]: 2025-12-02 10:07:22.223405681 +0000 UTC m=+0.049321008 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:07:22 localhost nova_compute[281045]: 2025-12-02 10:07:22.380 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:22 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:22.514 262347 INFO neutron.agent.dhcp.agent [None req-dee9b15f-eb72-4d73-84a0-710aeff6d1eb - - - - - -] DHCP configuration for ports {'d178cd50-67bc-475e-bb5c-e4cf16815921'} is completed#033[00m Dec 2 05:07:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:07:22 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": 
"config generate-minimal-conf"} : dispatch Dec 2 05:07:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:07:22 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:07:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:07:22 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev 02546f0c-828a-4dca-a8d7-cf4771e45a9e (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:07:22 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 02546f0c-828a-4dca-a8d7-cf4771e45a9e (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:07:22 localhost ceph-mgr[287188]: [progress INFO root] Completed event 02546f0c-828a-4dca-a8d7-cf4771e45a9e (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:07:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:07:22 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:07:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v263: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 938 B/s wr, 15 op/s Dec 2 05:07:23 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:07:23 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:07:23 localhost 
nova_compute[281045]: 2025-12-02 10:07:23.456 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v264: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 938 B/s wr, 15 op/s Dec 2 05:07:25 localhost nova_compute[281045]: 2025-12-02 10:07:25.571 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:25 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:25.683 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:25Z, description=, device_id=d23c300d-2106-463f-ba69-eebcc6860c57, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=07c2ad56-fdc6-41b4-a849-7660e9700481, ip_allocation=immediate, mac_address=fa:16:3e:f0:02:af, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, 
port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1644, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:25Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:07:25 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:07:25 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:25 localhost podman[314310]: 2025-12-02 10:07:25.896632362 +0000 UTC m=+0.064911567 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:07:25 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:26 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:26.090 262347 INFO neutron.agent.dhcp.agent [None req-dd692575-9ba4-42be-abc0-fb5009267c2b - - - - - -] DHCP configuration for ports {'07c2ad56-fdc6-41b4-a849-7660e9700481'} is completed#033[00m Dec 2 05:07:26 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:26.317 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:25Z, description=, device_id=7bf2b4c4-6334-4a1f-8be8-1ca6d15e82eb, 
device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d001dae5-639b-449c-a35c-a7a5c458790f, ip_allocation=immediate, mac_address=fa:16:3e:9b:85:74, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1647, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:25Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:07:26 localhost podman[314348]: 2025-12-02 10:07:26.695901781 +0000 UTC m=+0.061181313 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:07:26 localhost dnsmasq[262677]: 
read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses Dec 2 05:07:26 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:26 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:07:26 localhost podman[314361]: 2025-12-02 10:07:26.811909929 +0000 UTC m=+0.085702087 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible) Dec 2 05:07:26 localhost podman[314361]: 2025-12-02 10:07:26.830006115 +0000 UTC m=+0.103798283 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:07:26 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 05:07:26 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:26.958 262347 INFO neutron.agent.dhcp.agent [None req-191dab6e-2d15-44a9-90a0-31a31133bd5a - - - - - -] DHCP configuration for ports {'d001dae5-639b-449c-a35c-a7a5c458790f'} is completed#033[00m Dec 2 05:07:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 938 B/s wr, 15 op/s Dec 2 05:07:27 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:07:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:07:27 localhost nova_compute[281045]: 2025-12-02 10:07:27.382 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:28 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:07:28 localhost nova_compute[281045]: 2025-12-02 10:07:28.371 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:07:28 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1103887549' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:07:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:07:28 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1103887549' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:07:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v266: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 29 op/s Dec 2 05:07:30 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:07:30 localhost podman[314402]: 2025-12-02 10:07:30.295654922 +0000 UTC m=+0.058513300 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:07:30 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:30 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:30 localhost nova_compute[281045]: 2025-12-02 10:07:30.572 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:30 localhost nova_compute[281045]: 2025-12-02 
10:07:30.574 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v267: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 29 op/s Dec 2 05:07:32 localhost nova_compute[281045]: 2025-12-02 10:07:32.410 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:32 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:32.907 2 INFO neutron.agent.securitygroups_rpc [None req-4959ff7d-aca7-46d0-9143-48c9e561106c c695c8d7887d4f5d99397fbd9a108bd7 27cf39916c5c4bc1833487052acaa22a - - default default] Security group member updated ['202778bd-7cc5-43e0-846c-ad0385193194']#033[00m Dec 2 05:07:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v268: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 43 op/s Dec 2 05:07:33 localhost podman[239757]: time="2025-12-02T10:07:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:07:33 localhost podman[239757]: @ - - [02/Dec/2025:10:07:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:07:33 localhost podman[239757]: @ - - [02/Dec/2025:10:07:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19215 "" "Go-http-client/1.1" Dec 2 05:07:33 localhost systemd[1]: tmp-crun.iiXWdl.mount: Deactivated successfully. 
Dec 2 05:07:33 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:07:33 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:33 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:33 localhost podman[314440]: 2025-12-02 10:07:33.707865236 +0000 UTC m=+0.105175455 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:07:33 localhost nova_compute[281045]: 2025-12-02 10:07:33.882 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:34 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:34.018 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:34 localhost nova_compute[281045]: 2025-12-02 
10:07:34.019 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:34 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:34.021 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:07:34 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:34.022 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:07:34 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:34.798 2 INFO neutron.agent.securitygroups_rpc [None req-7ec4eb97-1d35-4cb1-ad23-be6283df01c3 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:34 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:34.838 262347 INFO neutron.agent.linux.ip_lib [None req-249a979f-df1d-417d-93da-1cfa4e879ae7 - - - - - -] Device tapc4e3a46c-de cannot be used as it has no MAC address#033[00m Dec 2 05:07:34 localhost nova_compute[281045]: 2025-12-02 10:07:34.859 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:34 localhost kernel: device tapc4e3a46c-de entered promiscuous mode Dec 2 05:07:34 localhost NetworkManager[5967]: [1764670054.8674] manager: (tapc4e3a46c-de): new Generic device (/org/freedesktop/NetworkManager/Devices/30) Dec 2 05:07:34 localhost nova_compute[281045]: 2025-12-02 10:07:34.867 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] 
on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:34 localhost ovn_controller[153778]: 2025-12-02T10:07:34Z|00136|binding|INFO|Claiming lport c4e3a46c-de2f-4fec-815f-929a0c5cb506 for this chassis. Dec 2 05:07:34 localhost ovn_controller[153778]: 2025-12-02T10:07:34Z|00137|binding|INFO|c4e3a46c-de2f-4fec-815f-929a0c5cb506: Claiming unknown Dec 2 05:07:34 localhost systemd-udevd[314471]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:07:34 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:34.881 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-f20ca235-57e0-46f5-9e44-df634db1299f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f20ca235-57e0-46f5-9e44-df634db1299f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c83c01183aba40c080a7dde4126b2e3b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e3578eb-6671-47df-b07e-7931aa192d6f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c4e3a46c-de2f-4fec-815f-929a0c5cb506) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:34 localhost ovn_metadata_agent[159477]: 2025-12-02 
10:07:34.883 159483 INFO neutron.agent.ovn.metadata.agent [-] Port c4e3a46c-de2f-4fec-815f-929a0c5cb506 in datapath f20ca235-57e0-46f5-9e44-df634db1299f bound to our chassis#033[00m Dec 2 05:07:34 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:34.885 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f20ca235-57e0-46f5-9e44-df634db1299f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:07:34 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:34.887 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[8102d604-9491-4e1d-8fd0-2a4137c4efbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:07:34 localhost journal[229262]: ethtool ioctl error on tapc4e3a46c-de: No such device Dec 2 05:07:34 localhost journal[229262]: ethtool ioctl error on tapc4e3a46c-de: No such device Dec 2 05:07:34 localhost nova_compute[281045]: 2025-12-02 10:07:34.902 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:34 localhost journal[229262]: ethtool ioctl error on tapc4e3a46c-de: No such device Dec 2 05:07:34 localhost ovn_controller[153778]: 2025-12-02T10:07:34Z|00138|binding|INFO|Setting lport c4e3a46c-de2f-4fec-815f-929a0c5cb506 ovn-installed in OVS Dec 2 05:07:34 localhost ovn_controller[153778]: 2025-12-02T10:07:34Z|00139|binding|INFO|Setting lport c4e3a46c-de2f-4fec-815f-929a0c5cb506 up in Southbound Dec 2 05:07:34 localhost journal[229262]: ethtool ioctl error on tapc4e3a46c-de: No such device Dec 2 05:07:34 localhost journal[229262]: ethtool ioctl error on tapc4e3a46c-de: No such device Dec 2 05:07:34 localhost journal[229262]: ethtool ioctl error on tapc4e3a46c-de: No such device Dec 2 05:07:34 localhost journal[229262]: ethtool ioctl error on 
tapc4e3a46c-de: No such device Dec 2 05:07:34 localhost journal[229262]: ethtool ioctl error on tapc4e3a46c-de: No such device Dec 2 05:07:34 localhost nova_compute[281045]: 2025-12-02 10:07:34.950 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:34 localhost nova_compute[281045]: 2025-12-02 10:07:34.974 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:35 localhost nova_compute[281045]: 2025-12-02 10:07:35.002 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v269: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 852 B/s wr, 27 op/s Dec 2 05:07:35 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:35.311 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:34Z, description=, device_id=ee41f236-3144-44c4-a93c-6155ab400908, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=446070b9-f224-43e5-ab20-4688e138137b, ip_allocation=immediate, mac_address=fa:16:3e:98:f8:25, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, 
provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1701, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:34Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:07:35 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:07:35 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:35 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:35 localhost podman[314537]: 2025-12-02 10:07:35.500130723 +0000 UTC m=+0.046331065 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:07:35 localhost nova_compute[281045]: 2025-12-02 10:07:35.574 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:35 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:35.726 2 
INFO neutron.agent.securitygroups_rpc [None req-87d4804a-2e84-429a-b45c-6794fadb1faa 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:35 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:35.761 262347 INFO neutron.agent.dhcp.agent [None req-722fce03-82e4-4640-bc71-c0e988c059da - - - - - -] DHCP configuration for ports {'446070b9-f224-43e5-ab20-4688e138137b'} is completed#033[00m Dec 2 05:07:35 localhost podman[314580]: Dec 2 05:07:35 localhost podman[314580]: 2025-12-02 10:07:35.80897176 +0000 UTC m=+0.071316323 container create 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:07:35 localhost systemd[1]: Started libpod-conmon-46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8.scope. Dec 2 05:07:35 localhost podman[314580]: 2025-12-02 10:07:35.770694334 +0000 UTC m=+0.033038867 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:07:35 localhost systemd[1]: Started libcrun container. 
Dec 2 05:07:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ebe0b74ba4dda832e5b3490b63546e986a845c8fa34d1b6f07fa0ac860c1a3a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:07:35 localhost podman[314580]: 2025-12-02 10:07:35.885798854 +0000 UTC m=+0.148143377 container init 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:07:35 localhost podman[314580]: 2025-12-02 10:07:35.895163471 +0000 UTC m=+0.157508004 container start 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:07:35 localhost dnsmasq[314598]: started, version 2.85 cachesize 150 Dec 2 05:07:35 localhost dnsmasq[314598]: DNS service limited to local subnets Dec 2 05:07:35 localhost dnsmasq[314598]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:07:35 localhost dnsmasq[314598]: warning: no upstream servers configured Dec 
2 05:07:35 localhost dnsmasq-dhcp[314598]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:07:35 localhost dnsmasq[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/addn_hosts - 0 addresses Dec 2 05:07:35 localhost dnsmasq-dhcp[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/host Dec 2 05:07:35 localhost dnsmasq-dhcp[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/opts Dec 2 05:07:36 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:36.012 262347 INFO neutron.agent.dhcp.agent [None req-d6d5210e-70fd-4811-ae62-610fcc48966e - - - - - -] DHCP configuration for ports {'bcca0f1d-63c7-4ef3-835f-3a1c95a17a98'} is completed#033[00m Dec 2 05:07:36 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:36.384 2 INFO neutron.agent.securitygroups_rpc [None req-d47733e1-0ad6-43a1-b5b9-ff46ea82484c 71c1ab73f6584cdc8a5ac07abc1165b6 c83c01183aba40c080a7dde4126b2e3b - - default default] Security group member updated ['8d157c15-6c1c-467c-9dbb-a97c83d265b6']#033[00m Dec 2 05:07:36 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:36.435 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:36Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ed4e0a2d-2b7b-4ec8-8de0-8664040b563f, ip_allocation=immediate, mac_address=fa:16:3e:1c:8a:ed, name=tempest-TagsExtTest-1556791831, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:32Z, description=, dns_domain=, id=f20ca235-57e0-46f5-9e44-df634db1299f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TagsExtTest-test-network-1118069821, port_security_enabled=True, 
project_id=c83c01183aba40c080a7dde4126b2e3b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=28317, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1683, status=ACTIVE, subnets=['11a5f377-3c38-4f51-b0a0-6917a8d4f7db'], tags=[], tenant_id=c83c01183aba40c080a7dde4126b2e3b, updated_at=2025-12-02T10:07:33Z, vlan_transparent=None, network_id=f20ca235-57e0-46f5-9e44-df634db1299f, port_security_enabled=True, project_id=c83c01183aba40c080a7dde4126b2e3b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['8d157c15-6c1c-467c-9dbb-a97c83d265b6'], standard_attr_id=1704, status=DOWN, tags=[], tenant_id=c83c01183aba40c080a7dde4126b2e3b, updated_at=2025-12-02T10:07:36Z on network f20ca235-57e0-46f5-9e44-df634db1299f#033[00m Dec 2 05:07:36 localhost dnsmasq[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/addn_hosts - 1 addresses Dec 2 05:07:36 localhost dnsmasq-dhcp[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/host Dec 2 05:07:36 localhost dnsmasq-dhcp[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/opts Dec 2 05:07:36 localhost podman[314615]: 2025-12-02 10:07:36.645849397 +0000 UTC m=+0.058707157 container kill 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:07:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:07:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:07:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:07:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:07:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:07:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:07:36 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:36.969 262347 INFO neutron.agent.dhcp.agent [None req-59d6016e-b18a-434e-83ed-d551979c4e16 - - - - - -] DHCP configuration for ports {'ed4e0a2d-2b7b-4ec8-8de0-8664040b563f'} is completed#033[00m Dec 2 05:07:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v270: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 853 B/s wr, 27 op/s Dec 2 05:07:37 localhost nova_compute[281045]: 2025-12-02 10:07:37.442 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:38 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:38.978 2 INFO neutron.agent.securitygroups_rpc [None req-c850bb74-fc0e-4a51-908b-f066332bb7ea 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:39 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v271: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 1.4 KiB/s wr, 42 op/s Dec 2 05:07:40 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:40.176 2 INFO neutron.agent.securitygroups_rpc [None 
req-7bcf89da-35a6-4e58-8ea7-d94146bd4928 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:40 localhost nova_compute[281045]: 2025-12-02 10:07:40.576 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:41 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:07:41 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:41 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:41 localhost podman[314653]: 2025-12-02 10:07:41.022541341 +0000 UTC m=+0.058529800 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:07:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v272: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 852 B/s wr, 27 op/s Dec 2 05:07:41 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:41.079 2 INFO neutron.agent.securitygroups_rpc [None req-5e11d272-dc8e-4f26-afbf-45da4f1c93dd c695c8d7887d4f5d99397fbd9a108bd7 27cf39916c5c4bc1833487052acaa22a - - default default] Security group member updated ['202778bd-7cc5-43e0-846c-ad0385193194']#033[00m Dec 2 05:07:41 localhost 
ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0. Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.088268) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28 Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061088311, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1960, "num_deletes": 267, "total_data_size": 2547812, "memory_usage": 2596256, "flush_reason": "Manual Compaction"} Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061102020, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1653647, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19541, "largest_seqno": 21496, "table_properties": {"data_size": 1646223, "index_size": 4318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16375, "raw_average_key_size": 20, "raw_value_size": 1630923, "raw_average_value_size": 2054, "num_data_blocks": 189, "num_entries": 794, "num_filter_entries": 794, "num_deletions": 267, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", 
"property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669940, "oldest_key_time": 1764669940, "file_creation_time": 1764670061, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}} Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 13816 microseconds, and 6873 cpu microseconds. Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.102079) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1653647 bytes OK Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.102107) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.104379) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.104409) EVENT_LOG_v1 {"time_micros": 1764670061104401, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.104431) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:07:41 localhost 
ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2538673, prev total WAL file size 2539422, number of live WAL files 2. Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.105437) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303138' seq:72057594037927935, type:22 .. '6C6F676D0034323731' seq:0, type:0; will stop at (end) Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1614KB)], [27(15MB)] Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061105519, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 18292374, "oldest_snapshot_seqno": -1} Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 12679 keys, 17929036 bytes, temperature: kUnknown Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061238104, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 17929036, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17854269, "index_size": 41962, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, 
"index_value_is_delta_encoded": 1, "filter_size": 31749, "raw_key_size": 339656, "raw_average_key_size": 26, "raw_value_size": 17635765, "raw_average_value_size": 1390, "num_data_blocks": 1601, "num_entries": 12679, "num_filter_entries": 12679, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670061, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}} Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.238632) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 17929036 bytes Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.240479) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.7 rd, 135.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 15.9 +0.0 blob) out(17.1 +0.0 blob), read-write-amplify(21.9) write-amplify(10.8) OK, records in: 13225, records dropped: 546 output_compression: NoCompression Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.240512) EVENT_LOG_v1 {"time_micros": 1764670061240499, "job": 14, "event": "compaction_finished", "compaction_time_micros": 132831, "compaction_time_cpu_micros": 46134, "output_level": 6, "num_output_files": 1, "total_output_size": 17929036, "num_input_records": 13225, "num_output_records": 12679, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061241397, "job": 14, "event": "table_file_deletion", "file_number": 29} Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061243810, "job": 
14, "event": "table_file_deletion", "file_number": 27} Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.105385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.244001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.244007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.244010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.244013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:07:41 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.244016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:07:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:07:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:07:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:07:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. 
Dec 2 05:07:42 localhost podman[314675]: 2025-12-02 10:07:42.087652986 +0000 UTC m=+0.087890894 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:07:42 localhost openstack_network_exporter[241816]: ERROR 10:07:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:07:42 localhost openstack_network_exporter[241816]: ERROR 10:07:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:07:42 localhost openstack_network_exporter[241816]: ERROR 10:07:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:07:42 localhost podman[314675]: 2025-12-02 10:07:42.12584709 +0000 UTC m=+0.126084978 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:07:42 localhost openstack_network_exporter[241816]: ERROR 10:07:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:07:42 localhost openstack_network_exporter[241816]: Dec 2 05:07:42 localhost openstack_network_exporter[241816]: ERROR 10:07:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:07:42 localhost openstack_network_exporter[241816]: Dec 2 05:07:42 localhost podman[314676]: 2025-12-02 10:07:42.06892591 +0000 UTC m=+0.069198919 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS) Dec 2 05:07:42 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 05:07:42 localhost podman[314674]: 2025-12-02 10:07:42.167477961 +0000 UTC m=+0.170434293 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 2 05:07:42 localhost podman[314676]: 2025-12-02 10:07:42.176944162 +0000 UTC 
m=+0.177217171 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 05:07:42 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:07:42 localhost podman[314682]: 2025-12-02 10:07:42.254694253 +0000 UTC m=+0.245184471 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:07:42 localhost podman[314674]: 2025-12-02 10:07:42.277302158 +0000 UTC m=+0.280258540 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:07:42 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 05:07:42 localhost podman[314682]: 2025-12-02 10:07:42.31572948 +0000 UTC m=+0.306219698 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:07:42 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:07:42 localhost nova_compute[281045]: 2025-12-02 10:07:42.441 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:42 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:42.807 2 INFO neutron.agent.securitygroups_rpc [None req-d64f03a9-f848-43c0-ae5d-2c025d3e76ac 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 852 B/s wr, 27 op/s Dec 2 05:07:43 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:43.410 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:42Z, description=, device_id=28981963-6c8e-4dd9-bb45-3615e25a3e05, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1fb4aa21-b66c-43e1-8729-0b136cb670d6, ip_allocation=immediate, mac_address=fa:16:3e:f8:b2:7b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, 
provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1744, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:42Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:07:43 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:07:43 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:43 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:43 localhost podman[314775]: 2025-12-02 10:07:43.593293328 +0000 UTC m=+0.030897211 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:07:43 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:43.932 262347 INFO neutron.agent.dhcp.agent [None req-fee1ccf2-56aa-40be-8f35-6cfb8256a1b1 - - - - - -] DHCP configuration for ports {'1fb4aa21-b66c-43e1-8729-0b136cb670d6'} is completed#033[00m Dec 2 05:07:44 localhost neutron_sriov_agent[255428]: 2025-12-02 
10:07:44.321 2 INFO neutron.agent.securitygroups_rpc [None req-4929d793-e537-4c53-a2f5-ddc0b60f2500 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:44 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "57640910-43d5-4fb7-b9cf-1d15a1cbc8ab", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:07:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, vol_name:cephfs) < "" Dec 2 05:07:44 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:07:44 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:07:44 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:07:44 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:07:44 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:07:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:44.799+0000 7fd37dd6f640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:07:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:44.799+0000 7fd37dd6f640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:07:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:44.799+0000 7fd37dd6f640 -1 client.0 error 
registering admin socket command: (17) File exists Dec 2 05:07:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:44.799+0000 7fd37dd6f640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:07:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:44.799+0000 7fd37dd6f640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:07:44 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57640910-43d5-4fb7-b9cf-1d15a1cbc8ab/.meta.tmp' Dec 2 05:07:44 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57640910-43d5-4fb7-b9cf-1d15a1cbc8ab/.meta.tmp' to config b'/volumes/_nogroup/57640910-43d5-4fb7-b9cf-1d15a1cbc8ab/.meta' Dec 2 05:07:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, vol_name:cephfs) < "" Dec 2 05:07:44 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "57640910-43d5-4fb7-b9cf-1d15a1cbc8ab", "format": "json"}]: dispatch Dec 2 05:07:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, vol_name:cephfs) < "" Dec 2 05:07:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, vol_name:cephfs) < "" Dec 2 05:07:45 localhost systemd[1]: tmp-crun.Rsq8UK.mount: Deactivated successfully. 
Dec 2 05:07:45 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:07:45 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:45 localhost podman[314828]: 2025-12-02 10:07:45.013320628 +0000 UTC m=+0.063500444 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:07:45 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v274: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Dec 2 05:07:45 localhost nova_compute[281045]: 2025-12-02 10:07:45.580 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:46 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:46.987 2 INFO neutron.agent.securitygroups_rpc [None req-771b159d-2ba2-4111-b13e-47ca58a8e2e2 71c1ab73f6584cdc8a5ac07abc1165b6 c83c01183aba40c080a7dde4126b2e3b - - default default] Security group member updated ['8d157c15-6c1c-467c-9dbb-a97c83d265b6']#033[00m Dec 2 05:07:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Dec 2 05:07:47 localhost 
dnsmasq[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/addn_hosts - 0 addresses Dec 2 05:07:47 localhost dnsmasq-dhcp[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/host Dec 2 05:07:47 localhost dnsmasq-dhcp[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/opts Dec 2 05:07:47 localhost podman[314865]: 2025-12-02 10:07:47.198815237 +0000 UTC m=+0.050780473 container kill 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 05:07:47 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:47.427 2 INFO neutron.agent.securitygroups_rpc [None req-fa6ee8ca-ed1b-4c8f-b78c-b44d9f9936bd 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:47 localhost nova_compute[281045]: 2025-12-02 10:07:47.483 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:47 localhost dnsmasq[314598]: exiting on receipt of SIGTERM Dec 2 05:07:47 localhost podman[314900]: 2025-12-02 10:07:47.797841379 +0000 UTC m=+0.062457762 container kill 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:07:47 localhost systemd[1]: libpod-46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8.scope: Deactivated successfully. Dec 2 05:07:47 localhost podman[314920]: 2025-12-02 10:07:47.863074544 +0000 UTC m=+0.043018643 container died 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:07:47 localhost systemd[1]: tmp-crun.B9gNmU.mount: Deactivated successfully. Dec 2 05:07:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:07:47 localhost podman[314920]: 2025-12-02 10:07:47.907496561 +0000 UTC m=+0.087440640 container remove 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 2 05:07:47 localhost ovn_controller[153778]: 2025-12-02T10:07:47Z|00140|binding|INFO|Releasing lport c4e3a46c-de2f-4fec-815f-929a0c5cb506 from this chassis (sb_readonly=0) Dec 2 05:07:47 localhost ovn_controller[153778]: 2025-12-02T10:07:47Z|00141|binding|INFO|Setting lport c4e3a46c-de2f-4fec-815f-929a0c5cb506 down in Southbound Dec 2 05:07:47 localhost kernel: device tapc4e3a46c-de left promiscuous mode Dec 2 05:07:47 localhost nova_compute[281045]: 2025-12-02 10:07:47.918 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:47.926 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-f20ca235-57e0-46f5-9e44-df634db1299f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-f20ca235-57e0-46f5-9e44-df634db1299f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c83c01183aba40c080a7dde4126b2e3b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e3578eb-6671-47df-b07e-7931aa192d6f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c4e3a46c-de2f-4fec-815f-929a0c5cb506) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:47.928 159483 INFO neutron.agent.ovn.metadata.agent [-] Port c4e3a46c-de2f-4fec-815f-929a0c5cb506 in datapath f20ca235-57e0-46f5-9e44-df634db1299f unbound from our chassis#033[00m Dec 2 05:07:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:47.931 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f20ca235-57e0-46f5-9e44-df634db1299f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:07:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:07:47.932 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e1942d95-7f5a-4efe-af96-265ce71d9dca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:07:47 localhost nova_compute[281045]: 2025-12-02 10:07:47.941 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:47 localhost systemd[1]: libpod-conmon-46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8.scope: Deactivated successfully. 
Dec 2 05:07:47 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:47.968 262347 INFO neutron.agent.dhcp.agent [None req-f31f1937-7b09-4696-81ad-787f3f41f226 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:07:48 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:48.177 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:07:48 localhost systemd[1]: var-lib-containers-storage-overlay-2ebe0b74ba4dda832e5b3490b63546e986a845c8fa34d1b6f07fa0ac860c1a3a-merged.mount: Deactivated successfully. Dec 2 05:07:48 localhost systemd[1]: run-netns-qdhcp\x2df20ca235\x2d57e0\x2d46f5\x2d9e44\x2ddf634db1299f.mount: Deactivated successfully. Dec 2 05:07:48 localhost nova_compute[281045]: 2025-12-02 10:07:48.500 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:48 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses Dec 2 05:07:48 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:48 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:48 localhost podman[314956]: 2025-12-02 10:07:48.521554074 +0000 UTC m=+0.044923642 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 05:07:49 
localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 3.2 KiB/s wr, 15 op/s Dec 2 05:07:49 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:49.110 2 INFO neutron.agent.securitygroups_rpc [None req-1c456783-7537-497f-860f-91e236f22124 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:49 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57640910-43d5-4fb7-b9cf-1d15a1cbc8ab", "format": "json"}]: dispatch Dec 2 05:07:49 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:07:49 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:07:49 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '57640910-43d5-4fb7-b9cf-1d15a1cbc8ab' of type subvolume Dec 2 05:07:49 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.198+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '57640910-43d5-4fb7-b9cf-1d15a1cbc8ab' of type subvolume Dec 2 05:07:49 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "57640910-43d5-4fb7-b9cf-1d15a1cbc8ab", "force": true, "format": "json"}]: dispatch Dec 2 
05:07:49 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, vol_name:cephfs) < "" Dec 2 05:07:49 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/57640910-43d5-4fb7-b9cf-1d15a1cbc8ab'' moved to trashcan Dec 2 05:07:49 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:07:49 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, vol_name:cephfs) < "" Dec 2 05:07:49 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.219+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.219+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.219+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.219+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.219+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.251+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.251+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.251+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.251+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:07:49 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.251+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists 
Dec 2 05:07:50 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:50.456 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:50Z, description=, device_id=78d092c4-a185-413f-9411-6829a90d534a, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=01ac7708-8593-4476-a708-c9acb0f1f95f, ip_allocation=immediate, mac_address=fa:16:3e:8b:f8:eb, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1769, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:50Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:07:50 localhost nova_compute[281045]: 2025-12-02 10:07:50.582 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:50 localhost dnsmasq[262677]: read 
/var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:07:50 localhost podman[315016]: 2025-12-02 10:07:50.653335342 +0000 UTC m=+0.058409798 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:07:50 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:50 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:07:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 05:07:50 localhost podman[315029]: 2025-12-02 10:07:50.775084817 +0000 UTC m=+0.084706417 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:07:50 localhost podman[315029]: 2025-12-02 10:07:50.794802033 +0000 UTC m=+0.104423623 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:07:50 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:07:50 localhost podman[315031]: 2025-12-02 10:07:50.880429235 +0000 UTC m=+0.186211377 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc.) 
Dec 2 05:07:50 localhost podman[315031]: 2025-12-02 10:07:50.896871481 +0000 UTC m=+0.202653693 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git) Dec 2 05:07:50 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:07:50 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:50.962 262347 INFO neutron.agent.dhcp.agent [None req-700cc9f1-1174-4a17-8173-8a657a7233c9 - - - - - -] DHCP configuration for ports {'01ac7708-8593-4476-a708-c9acb0f1f95f'} is completed#033[00m Dec 2 05:07:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v277: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 2.6 KiB/s wr, 0 op/s Dec 2 05:07:51 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:51.564 2 INFO neutron.agent.securitygroups_rpc [None req-3a04bc57-bfc9-42ed-a239-801c8326e405 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:52 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:52.379 2 INFO neutron.agent.securitygroups_rpc [None req-b88c63e0-efad-4ee2-bdbd-ab6bd93ed0e7 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:52 localhost nova_compute[281045]: 2025-12-02 10:07:52.486 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:53 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v278: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 5.8 KiB/s wr, 2 op/s Dec 2 05:07:53 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses Dec 2 05:07:53 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:53 localhost dnsmasq-dhcp[262677]: 
read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:53 localhost podman[315094]: 2025-12-02 10:07:53.463579204 +0000 UTC m=+0.048574225 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:07:54 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:54.022 2 INFO neutron.agent.securitygroups_rpc [None req-1b0d4d6e-60bd-47bc-abae-8825e5c440ec 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:54 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:54.810 2 INFO neutron.agent.securitygroups_rpc [None req-f2f34722-a858-417d-bb9f-16583a7fb9bc 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:54 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:54.920 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:54Z, description=, device_id=bd3a398a-f17b-4e2f-8103-70e4ec91527f, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a5e3f6a7-8a59-48f3-a0a7-6a9123ac18db, ip_allocation=immediate, 
mac_address=fa:16:3e:90:c2:e4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1799, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:54Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:07:55 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 5.8 KiB/s wr, 2 op/s Dec 2 05:07:55 localhost systemd[1]: tmp-crun.3i6Cp2.mount: Deactivated successfully. 
Dec 2 05:07:55 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:07:55 localhost podman[315130]: 2025-12-02 10:07:55.148653074 +0000 UTC m=+0.078530846 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:07:55 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:55 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:55 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:55.325 262347 INFO neutron.agent.dhcp.agent [None req-70b55b8b-cc49-4fd8-980d-3f9c95a496dc - - - - - -] DHCP configuration for ports {'a5e3f6a7-8a59-48f3-a0a7-6a9123ac18db'} is completed#033[00m Dec 2 05:07:55 localhost nova_compute[281045]: 2025-12-02 10:07:55.585 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:07:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 5.8 KiB/s wr, 2 op/s Dec 2 05:07:57 localhost systemd[1]: tmp-crun.lhs1w0.mount: Deactivated successfully. 
Dec 2 05:07:57 localhost podman[315151]: 2025-12-02 10:07:57.088161099 +0000 UTC m=+0.091793995 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:07:57 localhost podman[315151]: 2025-12-02 10:07:57.09959751 +0000 UTC m=+0.103230406 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:07:57 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:07:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:57 localhost nova_compute[281045]: 2025-12-02 10:07:57.521 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:57 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:57.906 2 INFO neutron.agent.securitygroups_rpc [None req-2105c7c1-c7a9-4dc4-9a73-811f6d407872 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:58 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses Dec 2 05:07:58 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:58 localhost podman[315188]: 2025-12-02 10:07:58.462947147 +0000 UTC m=+0.067834797 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:07:58 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v281: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 6.2 KiB/s wr, 2 op/s Dec 2 05:07:59 localhost neutron_dhcp_agent[262343]: 
2025-12-02 10:07:59.235 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:58Z, description=, device_id=e7c99296-9174-4aa1-8a50-fbf38ad25ab2, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c4605c3a-263f-4351-968e-4bd2c4b38f47, ip_allocation=immediate, mac_address=fa:16:3e:59:89:ec, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1849, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:59Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:07:59 localhost podman[315227]: 2025-12-02 10:07:59.427878961 +0000 UTC m=+0.050481374 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:07:59 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:07:59 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:59 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:07:59 localhost neutron_sriov_agent[255428]: 2025-12-02 10:07:59.456 2 INFO neutron.agent.securitygroups_rpc [None req-abbcf6d8-096d-46e2-96f3-3a8543ab77e7 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:59 localhost nova_compute[281045]: 2025-12-02 10:07:59.529 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:59 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:59.693 262347 INFO neutron.agent.dhcp.agent [None req-e4df9d5d-7f0a-47f5-b99b-75a474f9ba99 - - - - - -] DHCP configuration for ports {'c4605c3a-263f-4351-968e-4bd2c4b38f47'} is completed#033[00m Dec 2 05:07:59 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:07:59.734 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:59Z, description=, 
device_id=6e07310c-661c-4e65-99ba-3cf8abb9265e, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=80ff45a8-fbd8-425c-9b74-a117d6b2b0b7, ip_allocation=immediate, mac_address=fa:16:3e:09:9b:fa, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1851, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:59Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:07:59 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:07:59 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:07:59 localhost podman[315266]: 2025-12-02 10:07:59.950686789 +0000 UTC m=+0.064398902 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:07:59 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:00 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:00.267 262347 INFO neutron.agent.dhcp.agent [None req-e2b2850a-fbe7-4f1c-95df-aae417ce2aed - - - - - -] DHCP configuration for ports {'80ff45a8-fbd8-425c-9b74-a117d6b2b0b7'} is completed#033[00m Dec 2 05:08:00 localhost nova_compute[281045]: 2025-12-02 10:08:00.529 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:00 localhost nova_compute[281045]: 2025-12-02 10:08:00.587 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 3.7 KiB/s wr, 1 op/s Dec 2 05:08:01 localhost nova_compute[281045]: 2025-12-02 10:08:01.529 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e134 e134: 6 total, 6 up, 6 in Dec 2 
05:08:02 localhost nova_compute[281045]: 2025-12-02 10:08:02.569 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:02 localhost systemd[1]: tmp-crun.cne5GG.mount: Deactivated successfully. Dec 2 05:08:02 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:08:02 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:02 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:02 localhost podman[315303]: 2025-12-02 10:08:02.599665362 +0000 UTC m=+0.099596105 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:08:03 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v284: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 3.0 KiB/s wr, 20 op/s Dec 2 05:08:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:03.178 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:08:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:03.179 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:08:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:03.179 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:08:03 localhost nova_compute[281045]: 2025-12-02 10:08:03.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:03 localhost nova_compute[281045]: 2025-12-02 10:08:03.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:03 localhost nova_compute[281045]: 2025-12-02 10:08:03.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:03 localhost nova_compute[281045]: 2025-12-02 10:08:03.550 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:08:03 localhost nova_compute[281045]: 2025-12-02 10:08:03.551 281049 DEBUG oslo_concurrency.lockutils [None 
req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:08:03 localhost nova_compute[281045]: 2025-12-02 10:08:03.551 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:08:03 localhost nova_compute[281045]: 2025-12-02 10:08:03.552 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:08:03 localhost nova_compute[281045]: 2025-12-02 10:08:03.552 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:08:03 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e135 e135: 6 total, 6 up, 6 in Dec 2 05:08:03 localhost podman[239757]: time="2025-12-02T10:08:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:08:03 localhost podman[239757]: @ - - [02/Dec/2025:10:08:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:08:03 localhost podman[239757]: @ - - [02/Dec/2025:10:08:03 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19204 "" "Go-http-client/1.1" Dec 2 05:08:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:08:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2242023877' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:08:04 localhost nova_compute[281045]: 2025-12-02 10:08:04.032 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:08:04 localhost nova_compute[281045]: 2025-12-02 10:08:04.270 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:08:04 localhost nova_compute[281045]: 2025-12-02 10:08:04.273 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11546MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:08:04 localhost nova_compute[281045]: 2025-12-02 10:08:04.273 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:08:04 localhost nova_compute[281045]: 2025-12-02 10:08:04.274 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:08:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:08:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2808362731' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:08:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:08:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2808362731' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:08:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e136 e136: 6 total, 6 up, 6 in Dec 2 05:08:04 localhost nova_compute[281045]: 2025-12-02 10:08:04.648 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:08:04 localhost nova_compute[281045]: 2025-12-02 10:08:04.649 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:08:04 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:04.777 2 INFO neutron.agent.securitygroups_rpc [None req-22f3ee62-f7aa-4000-8792-d140ffb54960 ea09fd599b014976b4b6d101bd660615 64d30b95640d4bc4991756da49cb0163 - - default default] Security group member updated ['e4e82d11-7ddc-4424-b13a-044ca8b63239']#033[00m Dec 2 05:08:04 localhost nova_compute[281045]: 2025-12-02 10:08:04.975 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 05:08:05 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 4.2 KiB/s wr, 33 op/s Dec 2 05:08:05 localhost nova_compute[281045]: 2025-12-02 10:08:05.589 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:05 localhost nova_compute[281045]: 2025-12-02 10:08:05.670 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 05:08:05 localhost nova_compute[281045]: 2025-12-02 10:08:05.671 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 05:08:05 localhost nova_compute[281045]: 2025-12-02 10:08:05.686 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 05:08:05 localhost nova_compute[281045]: 2025-12-02 
10:08:05.713 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 05:08:05 localhost nova_compute[281045]: 2025-12-02 10:08:05.747 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf 
execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:08:06 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:06.102 2 INFO neutron.agent.securitygroups_rpc [None req-c8dc5996-311b-454a-bef8-be44e05069d7 ea09fd599b014976b4b6d101bd660615 64d30b95640d4bc4991756da49cb0163 - - default default] Security group member updated ['e4e82d11-7ddc-4424-b13a-044ca8b63239']#033[00m Dec 2 05:08:06 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:06.139 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:08:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:08:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2347714614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:08:06 localhost nova_compute[281045]: 2025-12-02 10:08:06.218 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:08:06 localhost nova_compute[281045]: 2025-12-02 10:08:06.225 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:08:06 localhost nova_compute[281045]: 2025-12-02 10:08:06.252 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 
1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:08:06 localhost nova_compute[281045]: 2025-12-02 10:08:06.254 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:08:06 localhost nova_compute[281045]: 2025-12-02 10:08:06.255 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:08:06 localhost nova_compute[281045]: 2025-12-02 10:08:06.256 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:06 localhost nova_compute[281045]: 2025-12-02 10:08:06.256 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 2 05:08:06 localhost nova_compute[281045]: 2025-12-02 10:08:06.277 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] There are 0 instances to clean _run_pending_deletes 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 2 05:08:06 localhost nova_compute[281045]: 2025-12-02 10:08:06.278 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:06 localhost nova_compute[281045]: 2025-12-02 10:08:06.278 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 2 05:08:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e137 e137: 6 total, 6 up, 6 in Dec 2 05:08:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:08:06 Dec 2 05:08:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 05:08:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap Dec 2 05:08:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['vms', 'manila_data', '.mgr', 'backups', 'manila_metadata', 'volumes', 'images'] Dec 2 05:08:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes Dec 2 05:08:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:08:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:08:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:08:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:08:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:08:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:08:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v289: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 5.5 KiB/s wr, 44 op/s Dec 2 05:08:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:08:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:08:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 2 05:08:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:08:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Dec 2 05:08:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:08:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.0001633056776940257 quantized to 32 (current 32) Dec 2 05:08:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:08:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00430047372278057 of space, bias 1.0, pg target 0.8586612533151871 quantized to 32 (current 32) Dec 2 05:08:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:08:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:08:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] 
effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:08:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:08:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:08:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 6.815762841987716e-06 of space, bias 4.0, pg target 0.005425347222222221 quantized to 16 (current 16) Dec 2 05:08:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:08:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:08:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:08:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:08:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:08:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:08:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:08:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:08:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:08:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:08:07 localhost nova_compute[281045]: 2025-12-02 10:08:07.288 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:07 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : 
from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "60995fd9-a7c9-4e80-ba2f-4e09200b332e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:08:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, vol_name:cephfs) < "" Dec 2 05:08:07 localhost nova_compute[281045]: 2025-12-02 10:08:07.315 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:07 localhost nova_compute[281045]: 2025-12-02 10:08:07.316 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:08:07 localhost nova_compute[281045]: 2025-12-02 10:08:07.316 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:08:07 localhost nova_compute[281045]: 2025-12-02 10:08:07.329 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 05:08:07 localhost nova_compute[281045]: 2025-12-02 10:08:07.330 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:07 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/60995fd9-a7c9-4e80-ba2f-4e09200b332e/.meta.tmp' Dec 2 05:08:07 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/60995fd9-a7c9-4e80-ba2f-4e09200b332e/.meta.tmp' to config b'/volumes/_nogroup/60995fd9-a7c9-4e80-ba2f-4e09200b332e/.meta' Dec 2 05:08:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, vol_name:cephfs) < "" Dec 2 05:08:07 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "60995fd9-a7c9-4e80-ba2f-4e09200b332e", "format": "json"}]: dispatch Dec 2 05:08:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, vol_name:cephfs) < "" Dec 2 05:08:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, vol_name:cephfs) < "" Dec 2 05:08:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 
full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:07 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:07.528 2 INFO neutron.agent.securitygroups_rpc [None req-99b1e585-32ae-4cc8-9a4d-b88a12900723 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:08:07 localhost nova_compute[281045]: 2025-12-02 10:08:07.572 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:08 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e138 e138: 6 total, 6 up, 6 in Dec 2 05:08:08 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:08.755 2 INFO neutron.agent.securitygroups_rpc [None req-c7a539d4-2f79-4a17-aaa4-1046dc1167cd b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:08:08 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:08.999 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:08:09 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v291: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 128 KiB/s rd, 18 KiB/s wr, 178 op/s Dec 2 05:08:09 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e139 e139: 6 total, 6 up, 6 in Dec 2 05:08:09 localhost nova_compute[281045]: 2025-12-02 10:08:09.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:09 localhost nova_compute[281045]: 2025-12-02 10:08:09.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - 
-] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:08:10 localhost nova_compute[281045]: 2025-12-02 10:08:10.592 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:10 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:10.645 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:08:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:08:10 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2189887501' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:08:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:08:10 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2189887501' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:08:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v293: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 118 KiB/s rd, 16 KiB/s wr, 163 op/s Dec 2 05:08:11 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:08:11 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < "" Dec 2 05:08:11 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta.tmp' Dec 2 05:08:11 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta.tmp' to config b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta' Dec 2 05:08:11 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < "" Dec 2 05:08:11 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "format": "json"}]: dispatch Dec 2 05:08:11 localhost ceph-mgr[287188]: [volumes INFO volumes.module] 
Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < "" Dec 2 05:08:11 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < "" Dec 2 05:08:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e140 e140: 6 total, 6 up, 6 in Dec 2 05:08:12 localhost openstack_network_exporter[241816]: ERROR 10:08:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:08:12 localhost openstack_network_exporter[241816]: ERROR 10:08:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:08:12 localhost openstack_network_exporter[241816]: ERROR 10:08:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:08:12 localhost openstack_network_exporter[241816]: ERROR 10:08:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:08:12 localhost openstack_network_exporter[241816]: Dec 2 05:08:12 localhost openstack_network_exporter[241816]: ERROR 10:08:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:08:12 localhost openstack_network_exporter[241816]: Dec 2 05:08:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:12 localhost nova_compute[281045]: 2025-12-02 10:08:12.574 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. 
Dec 2 05:08:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:08:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:08:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:08:13 localhost podman[315368]: 2025-12-02 10:08:13.057520588 +0000 UTC m=+0.060918813 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 05:08:13 localhost podman[315368]: 2025-12-02 10:08:13.065808433 +0000 UTC m=+0.069206628 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 216 KiB/s rd, 30 KiB/s wr, 298 op/s Dec 2 05:08:13 localhost systemd[1]: tmp-crun.MWISR6.mount: Deactivated successfully. Dec 2 05:08:13 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 05:08:13 localhost podman[315392]: 2025-12-02 10:08:13.131197102 +0000 UTC m=+0.089154621 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller) Dec 2 05:08:13 localhost podman[315369]: 2025-12-02 10:08:13.164437763 +0000 UTC m=+0.161444542 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:08:13 localhost podman[315369]: 2025-12-02 10:08:13.17573055 +0000 UTC m=+0.172737379 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:08:13 localhost podman[315392]: 2025-12-02 10:08:13.181841578 +0000 UTC m=+0.139799097 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 2 05:08:13 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 05:08:13 localhost podman[315389]: 2025-12-02 10:08:13.098127646 +0000 UTC m=+0.062530612 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute) Dec 2 05:08:13 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:08:13 localhost podman[315389]: 2025-12-02 10:08:13.22876897 +0000 UTC m=+0.193171886 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:08:13 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:08:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e141 e141: 6 total, 6 up, 6 in Dec 2 05:08:14 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:14.142 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:13Z, description=, device_id=fa4c36f4-387f-4161-9755-5d4f05fecaf8, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=dd85ff93-7a22-4f13-8b46-aaa1f9cacdaa, ip_allocation=immediate, mac_address=fa:16:3e:ad:e0:85, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1918, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:13Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:08:14 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:08:14 localhost dnsmasq-dhcp[262677]: read 
/var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:14 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:14 localhost podman[315468]: 2025-12-02 10:08:14.366428275 +0000 UTC m=+0.062390888 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:08:14 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e142 e142: 6 total, 6 up, 6 in Dec 2 05:08:14 localhost nova_compute[281045]: 2025-12-02 10:08:14.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:14 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "snap_name": "c562bc6d-afee-44f1-9f1b-5b7fe43288c6", "format": "json"}]: dispatch Dec 2 05:08:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c562bc6d-afee-44f1-9f1b-5b7fe43288c6, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < "" Dec 2 05:08:14 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:14.611 262347 INFO 
neutron.agent.dhcp.agent [None req-88192ab6-62ca-48db-8a74-d2e74f2dd84c - - - - - -] DHCP configuration for ports {'dd85ff93-7a22-4f13-8b46-aaa1f9cacdaa'} is completed#033[00m Dec 2 05:08:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c562bc6d-afee-44f1-9f1b-5b7fe43288c6, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < "" Dec 2 05:08:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v298: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 102 KiB/s rd, 14 KiB/s wr, 141 op/s Dec 2 05:08:15 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e143 e143: 6 total, 6 up, 6 in Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 
2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:15 localhost nova_compute[281045]: 2025-12-02 10:08:15.632 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e144 e144: 6 total, 6 up, 6 in Dec 2 05:08:16 localhost 
dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:08:16 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:16 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:16 localhost podman[315506]: 2025-12-02 10:08:16.794220013 +0000 UTC m=+0.055638360 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v301: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail Dec 2 05:08:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e145 e145: 6 total, 6 up, 6 in Dec 2 05:08:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:17 localhost nova_compute[281045]: 2025-12-02 10:08:17.577 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:18 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:18.372 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, 
created_at=2025-12-02T10:08:17Z, description=, device_id=39bd4c60-e6fd-4810-b3aa-833809eba7cb, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=817538ad-ee0b-4af8-be7e-37d998560f02, ip_allocation=immediate, mac_address=fa:16:3e:63:92:e4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1928, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:18Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:08:18 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:08:18 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:18 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:18 localhost podman[315544]: 2025-12-02 10:08:18.581971174 +0000 UTC m=+0.055206367 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:18 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:18.821 262347 INFO neutron.agent.dhcp.agent [None req-09dd9778-8466-4385-b370-97ba4ba93c55 - - - - - -] DHCP configuration for ports {'817538ad-ee0b-4af8-be7e-37d998560f02'} is completed#033[00m Dec 2 05:08:18 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:18.904 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, 
chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:18 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:18.906 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m Dec 2 05:08:18 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:18.908 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:18 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:18.909 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c2dc4a-21d5-4ccc-a36a-544189da9972]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 145 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 183 KiB/s rd, 21 KiB/s wr, 256 op/s Dec 2 05:08:19 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:19.295 2 INFO 
neutron.agent.securitygroups_rpc [None req-2bfb9ebd-1846-44bd-b2e4-1f309ec769c2 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:08:19 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:19.335 2 INFO neutron.agent.securitygroups_rpc [None req-a38d4309-d6ec-4127-b224-040aeb412100 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:08:20 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:20.054 2 INFO neutron.agent.securitygroups_rpc [None req-d321651e-4716-4e9e-b955-449cf71fa8bf 11daa5bc8801433f99b71663879a8016 62771fbe049e4d57aae1b3554ed3a36c - - default default] Security group member updated ['e79580ca-0f44-4e36-92d0-a0d65fb01c6b']#033[00m Dec 2 05:08:20 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:20.425 2 INFO neutron.agent.securitygroups_rpc [None req-d321651e-4716-4e9e-b955-449cf71fa8bf 11daa5bc8801433f99b71663879a8016 62771fbe049e4d57aae1b3554ed3a36c - - default default] Security group member updated ['e79580ca-0f44-4e36-92d0-a0d65fb01c6b']#033[00m Dec 2 05:08:20 localhost nova_compute[281045]: 2025-12-02 10:08:20.580 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:20 localhost nova_compute[281045]: 2025-12-02 10:08:20.634 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:20 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:20.646 2 INFO neutron.agent.securitygroups_rpc [None req-87212674-2d83-471b-8535-396909b240c7 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 
05:08:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:08:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:08:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "snap_name": "c562bc6d-afee-44f1-9f1b-5b7fe43288c6_d6e2b43c-7ed8-4069-8208-0ab8116bd864", "force": true, "format": "json"}]: dispatch Dec 2 05:08:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c562bc6d-afee-44f1-9f1b-5b7fe43288c6_d6e2b43c-7ed8-4069-8208-0ab8116bd864, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < "" Dec 2 05:08:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta.tmp' Dec 2 05:08:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta.tmp' to config b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta' Dec 2 05:08:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c562bc6d-afee-44f1-9f1b-5b7fe43288c6_d6e2b43c-7ed8-4069-8208-0ab8116bd864, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < "" Dec 2 05:08:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", 
"snap_name": "c562bc6d-afee-44f1-9f1b-5b7fe43288c6", "force": true, "format": "json"}]: dispatch Dec 2 05:08:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c562bc6d-afee-44f1-9f1b-5b7fe43288c6, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < "" Dec 2 05:08:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 145 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 143 KiB/s rd, 16 KiB/s wr, 199 op/s Dec 2 05:08:21 localhost systemd[1]: tmp-crun.BLY1Yw.mount: Deactivated successfully. Dec 2 05:08:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta.tmp' Dec 2 05:08:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta.tmp' to config b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta' Dec 2 05:08:21 localhost podman[315565]: 2025-12-02 10:08:21.095179606 +0000 UTC m=+0.098025863 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', 
'--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:08:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c562bc6d-afee-44f1-9f1b-5b7fe43288c6, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < "" Dec 2 05:08:21 localhost podman[315566]: 2025-12-02 10:08:21.153855129 +0000 UTC m=+0.150662350 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git) Dec 2 05:08:21 localhost podman[315565]: 2025-12-02 10:08:21.18153193 +0000 UTC m=+0.184378197 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:08:21 localhost podman[315566]: 2025-12-02 
10:08:21.193990232 +0000 UTC m=+0.190797453 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.expose-services=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41) Dec 2 05:08:21 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:08:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e146 e146: 6 total, 6 up, 6 in Dec 2 05:08:21 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0. 
Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.240547) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31 Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101240611, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 908, "num_deletes": 258, "total_data_size": 1857668, "memory_usage": 1876384, "flush_reason": "Manual Compaction"} Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101254089, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 1224963, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21501, "largest_seqno": 22404, "table_properties": {"data_size": 1220813, "index_size": 1877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10165, "raw_average_key_size": 21, "raw_value_size": 1212225, "raw_average_value_size": 2509, "num_data_blocks": 82, "num_entries": 483, "num_filter_entries": 483, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670061, "oldest_key_time": 1764670061, "file_creation_time": 1764670101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}} Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 13597 microseconds, and 5869 cpu microseconds. Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.254145) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 1224963 bytes OK Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.254170) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.256028) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.256079) EVENT_LOG_v1 {"time_micros": 1764670101256044, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.256107) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1852842, prev total WAL file size 
1852842, number of live WAL files 2. Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.256768) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. '7061786F73003132333030' seq:0, type:0; will stop at (end) Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(1196KB)], [30(17MB)] Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101256808, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 19153999, "oldest_snapshot_seqno": -1} Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 12630 keys, 17217640 bytes, temperature: kUnknown Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101336176, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 17217640, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17143771, "index_size": 41192, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31621, "raw_key_size": 339323, "raw_average_key_size": 26, "raw_value_size": 16926659, 
"raw_average_value_size": 1340, "num_data_blocks": 1564, "num_entries": 12630, "num_filter_entries": 12630, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}} Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.336507) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 17217640 bytes Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.340169) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 241.1 rd, 216.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 17.1 +0.0 blob) out(16.4 +0.0 blob), read-write-amplify(29.7) write-amplify(14.1) OK, records in: 13162, records dropped: 532 output_compression: NoCompression Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.340201) EVENT_LOG_v1 {"time_micros": 1764670101340186, "job": 16, "event": "compaction_finished", "compaction_time_micros": 79449, "compaction_time_cpu_micros": 33043, "output_level": 6, "num_output_files": 1, "total_output_size": 17217640, "num_input_records": 13162, "num_output_records": 12630, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101340543, "job": 16, "event": "table_file_deletion", "file_number": 32} Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101343424, "job": 
16, "event": "table_file_deletion", "file_number": 30} Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.256700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.343512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.343520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.343523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.343526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:21 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.343529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:21 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:21.764 2 INFO neutron.agent.securitygroups_rpc [None req-7754a6d4-074d-4d03-86b1-db3804b94ab5 11daa5bc8801433f99b71663879a8016 62771fbe049e4d57aae1b3554ed3a36c - - default default] Security group member updated ['e79580ca-0f44-4e36-92d0-a0d65fb01c6b']#033[00m Dec 2 05:08:22 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:22.307 2 INFO neutron.agent.securitygroups_rpc [None req-a959ed63-fb01-427b-9973-6a88ead4c1cf 11daa5bc8801433f99b71663879a8016 62771fbe049e4d57aae1b3554ed3a36c - - default default] Security group member updated ['e79580ca-0f44-4e36-92d0-a0d65fb01c6b']#033[00m Dec 2 05:08:22 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 05:08:22 localhost ceph-mon[301710]: rocksdb: 
[db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2121 writes, 22K keys, 2121 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.06 MB/s#012Cumulative WAL: 2121 writes, 2121 syncs, 1.00 writes per sync, written: 0.03 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2121 writes, 22K keys, 2121 commit groups, 1.0 writes per commit group, ingest: 35.72 MB, 0.06 MB/s#012Interval WAL: 2121 writes, 2121 syncs, 1.00 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 130.6 0.20 0.07 8 0.025 0 0 0.0 0.0#012 L6 1/0 16.42 MB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 4.5 162.5 150.2 0.78 0.31 7 0.112 89K 3465 0.0 0.0#012 Sum 1/0 16.42 MB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 5.5 129.4 146.2 0.98 0.38 15 0.065 89K 3465 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 5.5 129.8 146.6 0.98 0.38 14 0.070 89K 3465 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 0.0 162.5 150.2 0.78 0.31 7 0.112 89K 3465 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 
0.0 0.0 0.0 0.0 132.2 0.20 0.07 7 0.028 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.025, interval 0.025#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.24 MB/s write, 0.12 GB read, 0.21 MB/s read, 1.0 seconds#012Interval compaction: 0.14 GB write, 0.24 MB/s write, 0.12 GB read, 0.21 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562ea3bdf1f0#2 capacity: 308.00 MB usage: 11.81 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 9.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(543,11.20 MB,3.63551%) FilterBlock(15,266.67 KB,0.0845525%) IndexBlock(15,355.95 KB,0.112861%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Dec 2 05:08:22 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:22.350 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:21Z, description=, device_id=360ff237-df4a-410c-985b-04b10ed3866a, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=34861965-833c-44c1-9b9e-024a5e7ba046, ip_allocation=immediate, 
mac_address=fa:16:3e:43:ac:7f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1954, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:22Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:08:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e147 e147: 6 total, 6 up, 6 in Dec 2 05:08:22 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses Dec 2 05:08:22 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:22 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:22 localhost podman[315626]: 2025-12-02 10:08:22.538183605 +0000 UTC m=+0.055119715 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:08:22 localhost nova_compute[281045]: 2025-12-02 10:08:22.580 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:22 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:22.775 262347 INFO neutron.agent.dhcp.agent [None req-b1c13362-4aea-4376-a3a1-260b5e8a1feb - - - - - -] DHCP configuration for ports {'34861965-833c-44c1-9b9e-024a5e7ba046'} is completed#033[00m Dec 2 05:08:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v307: 177 pgs: 177 active+clean; 146 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 162 KiB/s rd, 25 KiB/s wr, 228 op/s Dec 2 05:08:23 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e148 e148: 6 total, 6 up, 6 in Dec 2 05:08:24 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:08:24 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:08:24 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:08:24 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 
05:08:24 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:08:24 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev 2ae50231-5da6-4ce4-8428-6b7d0b4f970c (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:08:24 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 2ae50231-5da6-4ce4-8428-6b7d0b4f970c (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:08:24 localhost ceph-mgr[287188]: [progress INFO root] Completed event 2ae50231-5da6-4ce4-8428-6b7d0b4f970c (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:08:24 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:08:24 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:08:24 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "format": "json"}]: dispatch Dec 2 05:08:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:aa7c0661-ed15-4711-8a85-f361d992598b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:08:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:aa7c0661-ed15-4711-8a85-f361d992598b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:08:24 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa7c0661-ed15-4711-8a85-f361d992598b' of type subvolume Dec 2 05:08:24 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:24.303+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa7c0661-ed15-4711-8a85-f361d992598b' of type subvolume Dec 2 05:08:24 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "force": true, "format": "json"}]: dispatch Dec 2 05:08:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < "" Dec 2 05:08:24 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b'' moved to trashcan Dec 2 05:08:24 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:08:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < "" Dec 2 05:08:24 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:08:24 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:08:24 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e149 e149: 6 total, 6 up, 6 in Dec 2 05:08:24 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:24.636 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 
2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:24 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:24.638 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 
7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m Dec 2 05:08:24 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:24.640 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:24 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:24.641 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[b397f820-8c50-427a-ba9b-8a07f006834a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:24 localhost nova_compute[281045]: 2025-12-02 10:08:24.736 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v310: 177 pgs: 177 active+clean; 146 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 13 KiB/s wr, 42 op/s Dec 2 05:08:25 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e150 e150: 6 total, 6 up, 6 in Dec 2 05:08:25 localhost nova_compute[281045]: 2025-12-02 10:08:25.636 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:26 localhost nova_compute[281045]: 2025-12-02 10:08:26.224 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:26 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:08:26 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:26 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:26 localhost podman[315748]: 
2025-12-02 10:08:26.261604394 +0000 UTC m=+0.064627527 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 05:08:26 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:26.352 2 INFO neutron.agent.securitygroups_rpc [None req-e2093ff2-f702-4cf4-8beb-c324b04696df b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:08:26 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:08:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < "" Dec 2 05:08:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp' Dec 2 05:08:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp' to config 
b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta' Dec 2 05:08:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < "" Dec 2 05:08:27 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "format": "json"}]: dispatch Dec 2 05:08:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < "" Dec 2 05:08:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < "" Dec 2 05:08:27 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:27.075 2 INFO neutron.agent.securitygroups_rpc [None req-95108d82-ee5d-48ad-b799-8f24c524b687 378bbf1156ab482eae3359fa477651da 13c70d8f74354389b175376619620536 - - default default] Security group member updated ['20308e6b-d2a0-4e90-a058-a0e30da512e9']#033[00m Dec 2 05:08:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 146 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 12 KiB/s wr, 37 op/s Dec 2 05:08:27 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:27.113 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:26Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, 
extra_dhcp_opts=[], fixed_ips=[], id=491f7b83-ee3f-430e-932f-490e3e05d878, ip_allocation=immediate, mac_address=fa:16:3e:3d:ad:62, name=tempest-RoutersAdminNegativeIpV6Test-807161528, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=True, project_id=13c70d8f74354389b175376619620536, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['20308e6b-d2a0-4e90-a058-a0e30da512e9'], standard_attr_id=1979, status=DOWN, tags=[], tenant_id=13c70d8f74354389b175376619620536, updated_at=2025-12-02T10:08:26Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:08:27 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:08:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:08:27 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses Dec 2 05:08:27 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:27 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:27 
localhost podman[315787]: 2025-12-02 10:08:27.334072817 +0000 UTC m=+0.069442825 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:08:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:08:27 localhost podman[315800]: 2025-12-02 10:08:27.452277719 +0000 UTC m=+0.090374988 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:08:27 localhost podman[315800]: 2025-12-02 10:08:27.466939369 +0000 UTC m=+0.105036678 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:08:27 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 05:08:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:27.543 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], 
mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:27.545 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m Dec 2 05:08:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:27.548 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:27.549 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcb9a95-cb84-4e09-ba20-13cf3905df1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:27 localhost nova_compute[281045]: 2025-12-02 10:08:27.613 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:27 localhost 
ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:08:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:08:27 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3666129381' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:08:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:08:27 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3666129381' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:08:27 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:27.742 262347 INFO neutron.agent.dhcp.agent [None req-d3c736b6-a536-43e4-90e1-0567dd9fd942 - - - - - -] DHCP configuration for ports {'491f7b83-ee3f-430e-932f-490e3e05d878'} is completed#033[00m Dec 2 05:08:27 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:08:27 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:27 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:27 localhost podman[315844]: 2025-12-02 10:08:27.883278772 +0000 UTC m=+0.054525036 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:08:28 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "60995fd9-a7c9-4e80-ba2f-4e09200b332e", "format": "json"}]: dispatch Dec 2 05:08:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:08:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:08:28 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '60995fd9-a7c9-4e80-ba2f-4e09200b332e' of type subvolume Dec 2 05:08:28 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:28.210+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '60995fd9-a7c9-4e80-ba2f-4e09200b332e' of type subvolume Dec 2 05:08:28 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "60995fd9-a7c9-4e80-ba2f-4e09200b332e", "force": true, "format": "json"}]: dispatch Dec 2 05:08:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, vol_name:cephfs) < "" Dec 2 05:08:28 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/60995fd9-a7c9-4e80-ba2f-4e09200b332e'' moved to trashcan Dec 2 05:08:28 
localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:08:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, vol_name:cephfs) < "" Dec 2 05:08:28 localhost nova_compute[281045]: 2025-12-02 10:08:28.471 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:28 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:28.619 262347 INFO neutron.agent.dhcp.agent [None req-633af77c-d730-4d72-9be7-502ca6237d88 - - - - - -] Synchronizing state#033[00m Dec 2 05:08:28 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:28.907 262347 INFO neutron.agent.dhcp.agent [None req-285c8656-c63d-4a62-a012-01c1dac192db - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 2 05:08:28 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:28.908 262347 INFO neutron.agent.dhcp.agent [-] Starting network 82491f39-5f47-4832-8f3f-0918125a354c dhcp configuration#033[00m Dec 2 05:08:28 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:28.909 262347 INFO neutron.agent.dhcp.agent [-] Finished network 82491f39-5f47-4832-8f3f-0918125a354c dhcp configuration#033[00m Dec 2 05:08:28 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:28.909 262347 INFO neutron.agent.dhcp.agent [None req-285c8656-c63d-4a62-a012-01c1dac192db - - - - - -] Synchronizing state complete#033[00m Dec 2 05:08:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v313: 177 pgs: 177 active+clean; 146 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 17 KiB/s wr, 95 op/s Dec 2 05:08:29 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:29.179 2 INFO neutron.agent.securitygroups_rpc [None req-0c3b85c4-8ee4-4ede-a5c5-9e006eeb1903 7602b6bff04a41118e902187d8f95daa 
39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:08:29 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:29.361 2 INFO neutron.agent.securitygroups_rpc [None req-29a3155c-2369-431f-9930-578d28142354 378bbf1156ab482eae3359fa477651da 13c70d8f74354389b175376619620536 - - default default] Security group member updated ['20308e6b-d2a0-4e90-a058-a0e30da512e9']#033[00m Dec 2 05:08:29 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:29.554 2 INFO neutron.agent.securitygroups_rpc [None req-4ad98beb-2033-4528-bdc9-387b15719003 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:08:29 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:08:29 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:29 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:29 localhost podman[315882]: 2025-12-02 10:08:29.593726319 +0000 UTC m=+0.060582652 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 05:08:29 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:29.707 262347 INFO neutron.agent.dhcp.agent [None req-ebba84e8-672c-4fcb-91f2-6a8526455fe8 - - - - - -] Trigger 
reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:29Z, description=, device_id=16d8413f-4499-43df-91d5-75a325d35422, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9fbec2c7-c6c3-4e79-b4aa-9f65686bea53, ip_allocation=immediate, mac_address=fa:16:3e:85:6b:b5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1990, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:29Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:08:29 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:08:29 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:29 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:29 localhost podman[315919]: 2025-12-02 
10:08:29.870657328 +0000 UTC m=+0.046320754 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:29 localhost systemd[1]: tmp-crun.XOGvZS.mount: Deactivated successfully. Dec 2 05:08:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:30.049 262347 INFO neutron.agent.dhcp.agent [None req-15385913-4925-4c05-8b48-a1cf173ad0cc - - - - - -] DHCP configuration for ports {'9fbec2c7-c6c3-4e79-b4aa-9f65686bea53'} is completed#033[00m Dec 2 05:08:30 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "snap_name": "9664f206-7cb4-4a1a-b619-2a201c7ebe10", "format": "json"}]: dispatch Dec 2 05:08:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < "" Dec 2 05:08:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < "" Dec 2 05:08:30 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:30.604 2 INFO neutron.agent.securitygroups_rpc [None 
req-84026af9-2eee-4701-994f-c9f2d1b31806 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:08:30 localhost nova_compute[281045]: 2025-12-02 10:08:30.638 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:30.783 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:08:31 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:31.000 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:08:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v314: 177 pgs: 177 active+clean; 146 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 14 KiB/s wr, 76 op/s Dec 2 05:08:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e151 e151: 6 total, 6 up, 6 in Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0. 
Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.235497) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34 Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111235537, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 470, "num_deletes": 252, "total_data_size": 408037, "memory_usage": 417736, "flush_reason": "Manual Compaction"} Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111241603, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 267212, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22409, "largest_seqno": 22874, "table_properties": {"data_size": 264600, "index_size": 659, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7423, "raw_average_key_size": 21, "raw_value_size": 259020, "raw_average_value_size": 742, "num_data_blocks": 29, "num_entries": 349, "num_filter_entries": 349, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670101, "oldest_key_time": 1764670101, "file_creation_time": 1764670111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 6154 microseconds, and 1994 cpu microseconds. Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.241650) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 267212 bytes OK Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.241671) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.244117) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.244142) EVENT_LOG_v1 {"time_micros": 1764670111244135, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.244166) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 405071, prev total WAL file size 405071, number of live 
WAL files 2. Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.244783) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303037' seq:72057594037927935, type:22 .. '6D6772737461740034323538' seq:0, type:0; will stop at (end) Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(260KB)], [33(16MB)] Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111244833, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 17484852, "oldest_snapshot_seqno": -1} Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 12456 keys, 15368831 bytes, temperature: kUnknown Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111331651, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 15368831, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15300752, "index_size": 35850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31173, "raw_key_size": 336044, "raw_average_key_size": 26, "raw_value_size": 15091207, "raw_average_value_size": 
1211, "num_data_blocks": 1340, "num_entries": 12456, "num_filter_entries": 12456, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.331932) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 15368831 bytes Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.334848) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.2 rd, 176.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 16.4 +0.0 blob) out(14.7 +0.0 blob), read-write-amplify(122.9) write-amplify(57.5) OK, records in: 12979, records dropped: 523 output_compression: NoCompression Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.334883) EVENT_LOG_v1 {"time_micros": 1764670111334867, "job": 18, "event": "compaction_finished", "compaction_time_micros": 86916, "compaction_time_cpu_micros": 47485, "output_level": 6, "num_output_files": 1, "total_output_size": 15368831, "num_input_records": 12979, "num_output_records": 12456, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111335266, "job": 18, "event": "table_file_deletion", "file_number": 35} Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111339231, "job": 
18, "event": "table_file_deletion", "file_number": 33} Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.244698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.339308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.339316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.339320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.339325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:31 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.339329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:32 localhost nova_compute[281045]: 2025-12-02 10:08:32.615 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v316: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 22 KiB/s wr, 97 op/s Dec 2 05:08:33 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e152 e152: 6 total, 6 up, 6 in Dec 2 05:08:33 localhost podman[239757]: time="2025-12-02T10:08:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 
05:08:33 localhost podman[239757]: @ - - [02/Dec/2025:10:08:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:08:33 localhost podman[239757]: @ - - [02/Dec/2025:10:08:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19210 "" "Go-http-client/1.1" Dec 2 05:08:33 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "snap_name": "9664f206-7cb4-4a1a-b619-2a201c7ebe10", "target_sub_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "format": "json"}]: dispatch Dec 2 05:08:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, target_sub_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, vol_name:cephfs) < "" Dec 2 05:08:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp' Dec 2 05:08:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp' to config b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta' Dec 2 05:08:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 898ff1c3-8d33-4699-8b66-ad70851c4a10 for path b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945' Dec 2 05:08:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp' Dec 2 
05:08:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp' to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta' Dec 2 05:08:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:08:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, target_sub_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, vol_name:cephfs) < "" Dec 2 05:08:33 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "format": "json"}]: dispatch Dec 2 05:08:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:08:33 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.900+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.900+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.900+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.900+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.900+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:08:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945 Dec 2 05:08:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, dd0a2a5c-154a-4d2f-9d8b-fce17b535945) Dec 2 05:08:33 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.945+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.945+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 
05:08:33 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.945+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.945+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.945+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:08:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, dd0a2a5c-154a-4d2f-9d8b-fce17b535945) -- by 0 seconds Dec 2 05:08:33 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:33.974 2 INFO neutron.agent.securitygroups_rpc [None req-3fe80696-0018-4728-a361-06aaa88dce01 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:08:34 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp' Dec 2 05:08:34 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp' to config b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta' Dec 2 05:08:34 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:34.114 159483 DEBUG 
ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:34 localhost nova_compute[281045]: 2025-12-02 10:08:34.114 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:34 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:34.115 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:08:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v318: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 22 KiB/s wr, 97 op/s Dec 2 05:08:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:35.356 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:35.358 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m Dec 2 05:08:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:35.361 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:35 localhost ovn_metadata_agent[159477]: 2025-12-02 
10:08:35.362 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[041c7354-45a7-4073-9997-4bd8aa86216e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:35 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:08:35 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/640511349' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:08:35 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:08:35 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/640511349' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:08:35 localhost nova_compute[281045]: 2025-12-02 10:08:35.679 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:36 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:08:36 localhost podman[315983]: 2025-12-02 10:08:36.689742805 +0000 UTC m=+0.067232357 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:36 localhost dnsmasq-dhcp[262677]: read 
/var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:36 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:36 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8796ecd6-5c3f-49b4-a11b-51a83206e216", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:08:36 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, vol_name:cephfs) < "" Dec 2 05:08:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:08:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:08:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:08:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:08:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:08:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:08:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.snap/9664f206-7cb4-4a1a-b619-2a201c7ebe10/2b81ef38-0fbb-45e8-bcd4-3691655610b8' to b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/b8120ca7-c290-4e3a-9129-3d9f8c2f97c9' Dec 2 05:08:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v319: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 8.7 KiB/s wr, 26 op/s Dec 2 05:08:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8796ecd6-5c3f-49b4-a11b-51a83206e216/.meta.tmp' Dec 2 05:08:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8796ecd6-5c3f-49b4-a11b-51a83206e216/.meta.tmp' to config b'/volumes/_nogroup/8796ecd6-5c3f-49b4-a11b-51a83206e216/.meta' Dec 2 05:08:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, vol_name:cephfs) < "" Dec 2 05:08:37 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8796ecd6-5c3f-49b4-a11b-51a83206e216", "format": "json"}]: dispatch Dec 2 05:08:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, vol_name:cephfs) < "" Dec 2 05:08:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config 
b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp' Dec 2 05:08:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp' to config b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta' Dec 2 05:08:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, vol_name:cephfs) < "" Dec 2 05:08:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.clone_index] untracking 898ff1c3-8d33-4699-8b66-ad70851c4a10 Dec 2 05:08:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp' Dec 2 05:08:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp' to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta' Dec 2 05:08:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp' Dec 2 05:08:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp' to config b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta' Dec 2 05:08:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, dd0a2a5c-154a-4d2f-9d8b-fce17b535945) Dec 2 05:08:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:37 localhost nova_compute[281045]: 2025-12-02 
10:08:37.653 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:38 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:38.065 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:37Z, description=, device_id=c7017fcc-0436-48d2-a91c-540638aa1a1f, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0fe53480-3bce-4fec-ade2-8b47c031657a, ip_allocation=immediate, mac_address=fa:16:3e:90:66:d1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2032, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:37Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:08:38 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:38.658 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:38 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:38.659 159483 INFO 
neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m Dec 2 05:08:38 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:38.660 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:38 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:38.661 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[abf91245-d36f-46f5-8e0e-c2673f8635a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:38 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:08:38 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:38 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:38 localhost podman[316021]: 2025-12-02 10:08:38.748904787 +0000 UTC m=+0.057778767 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:08:39 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v320: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 32 KiB/s wr, 53 op/s Dec 2 05:08:39 
localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:39.177 262347 INFO neutron.agent.dhcp.agent [None req-b09aea15-8d48-4c40-85c0-6b4bcd469730 - - - - - -] DHCP configuration for ports {'0fe53480-3bce-4fec-ade2-8b47c031657a'} is completed#033[00m Dec 2 05:08:40 localhost nova_compute[281045]: 2025-12-02 10:08:40.725 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:40 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8796ecd6-5c3f-49b4-a11b-51a83206e216", "format": "json"}]: dispatch Dec 2 05:08:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:08:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:08:40 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8796ecd6-5c3f-49b4-a11b-51a83206e216' of type subvolume Dec 2 05:08:40 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:40.761+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8796ecd6-5c3f-49b4-a11b-51a83206e216' of type subvolume Dec 2 05:08:40 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8796ecd6-5c3f-49b4-a11b-51a83206e216", "force": true, "format": "json"}]: dispatch Dec 2 05:08:40 localhost ceph-mgr[287188]: [volumes INFO 
volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, vol_name:cephfs) < "" Dec 2 05:08:40 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8796ecd6-5c3f-49b4-a11b-51a83206e216'' moved to trashcan Dec 2 05:08:40 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:08:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, vol_name:cephfs) < "" Dec 2 05:08:40 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:40.866 2 INFO neutron.agent.securitygroups_rpc [None req-33ec59f6-70cd-4828-b040-1367d796c3cf 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v321: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 26 KiB/s wr, 43 op/s Dec 2 05:08:41 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:41.117 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:08:41 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:41.223 2 INFO neutron.agent.securitygroups_rpc [None req-67c167f7-d811-43ca-8236-d9881acaf013 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 
2 05:08:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e153 e153: 6 total, 6 up, 6 in Dec 2 05:08:41 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:41.528 2 INFO neutron.agent.securitygroups_rpc [None req-77ccc14a-3033-433b-916a-b05c2a4a2183 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:42 localhost openstack_network_exporter[241816]: ERROR 10:08:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:08:42 localhost openstack_network_exporter[241816]: ERROR 10:08:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:08:42 localhost openstack_network_exporter[241816]: ERROR 10:08:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:08:42 localhost openstack_network_exporter[241816]: ERROR 10:08:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:08:42 localhost openstack_network_exporter[241816]: Dec 2 05:08:42 localhost openstack_network_exporter[241816]: ERROR 10:08:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:08:42 localhost openstack_network_exporter[241816]: Dec 2 05:08:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:42 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:42.499 2 INFO neutron.agent.securitygroups_rpc [None req-fd3bc9cf-ed5a-495f-beed-1c7d898feb8a 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:08:42 localhost nova_compute[281045]: 2025-12-02 10:08:42.656 281049 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:42 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:42.951 2 INFO neutron.agent.securitygroups_rpc [None req-98ebf0df-0324-4fc7-82f5-9efe0544203a 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v323: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 26 KiB/s wr, 24 op/s Dec 2 05:08:43 localhost systemd[1]: tmp-crun.MyRFUn.mount: Deactivated successfully. Dec 2 05:08:43 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:08:43 localhost podman[316059]: 2025-12-02 10:08:43.843807026 +0000 UTC m=+0.055258678 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:43 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:43 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. 
Dec 2 05:08:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:08:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:08:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:08:43 localhost podman[316073]: 2025-12-02 10:08:43.9503552 +0000 UTC m=+0.084709934 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 05:08:43 localhost podman[316077]: 2025-12-02 10:08:43.992838366 +0000 UTC m=+0.120354649 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 05:08:44 localhost podman[316074]: 2025-12-02 10:08:43.999718237 +0000 UTC m=+0.128732376 container health_status 
8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:08:44 localhost podman[316074]: 2025-12-02 10:08:44.009809158 +0000 UTC m=+0.138823317 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) 
Dec 2 05:08:44 localhost podman[316075]: 2025-12-02 10:08:43.969133127 +0000 UTC m=+0.096874207 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:44 localhost podman[316077]: 2025-12-02 10:08:44.063074444 +0000 UTC m=+0.190590717 container exec_died 
c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 2 05:08:44 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:08:44 localhost podman[316075]: 2025-12-02 10:08:44.082763339 +0000 UTC m=+0.210504429 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm) Dec 2 05:08:44 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:08:44 localhost podman[316073]: 2025-12-02 10:08:44.099526354 +0000 UTC m=+0.233881098 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:08:44 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: 
Deactivated successfully. Dec 2 05:08:44 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:08:44 localhost systemd[1]: tmp-crun.obnRm0.mount: Deactivated successfully. Dec 2 05:08:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v324: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 26 KiB/s wr, 23 op/s Dec 2 05:08:45 localhost nova_compute[281045]: 2025-12-02 10:08:45.727 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:45 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:45.825 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:45Z, description=, device_id=aa61d6a1-1090-4f06-abf3-fa0ed7c99a0f, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f08da1e0-d5a3-4e95-9204-7c3f32b4d715, ip_allocation=immediate, mac_address=fa:16:3e:83:07:58, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, 
vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2057, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:45Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:08:45 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:45.829 2 INFO neutron.agent.securitygroups_rpc [None req-a2676272-e4a7-4aab-af43-dd7cd656aeb3 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:46 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:08:46 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:46 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:46 localhost podman[316177]: 2025-12-02 10:08:46.05081684 +0000 UTC m=+0.049102059 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:46 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:46.419 262347 INFO neutron.agent.dhcp.agent [None req-c6915532-ee5a-4d49-9550-16e481e85ae6 - - - - - -] DHCP configuration for ports {'f08da1e0-d5a3-4e95-9204-7c3f32b4d715'} is completed#033[00m Dec 2 
05:08:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v325: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 26 KiB/s wr, 23 op/s Dec 2 05:08:47 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:47.337 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:46Z, description=, device_id=69f8516c-fa8d-4437-9829-7d5c8ddbd262, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=58f532c0-e523-4491-9cfc-d85fc74c7485, ip_allocation=immediate, mac_address=fa:16:3e:0f:8f:1f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2061, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:47Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:08:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:47.360 159483 DEBUG 
ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:47 localhost 
ovn_metadata_agent[159477]: 2025-12-02 10:08:47.362 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m Dec 2 05:08:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:47.364 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:47.364 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[f47f91e6-21cf-4e33-be0b-4f87e4b0a644]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:47 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses Dec 2 05:08:47 localhost podman[316215]: 2025-12-02 10:08:47.545620251 +0000 UTC m=+0.056620861 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 2 05:08:47 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:47 localhost dnsmasq-dhcp[262677]: read 
/var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:47 localhost nova_compute[281045]: 2025-12-02 10:08:47.694 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:47 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:47.860 262347 INFO neutron.agent.dhcp.agent [None req-7a1811ca-8a4f-41de-9de7-7c5ae4b0c2d3 - - - - - -] DHCP configuration for ports {'58f532c0-e523-4491-9cfc-d85fc74c7485'} is completed#033[00m Dec 2 05:08:48 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:48.544 2 INFO neutron.agent.securitygroups_rpc [None req-1e3a85d5-a4d4-4ac9-b4fb-7c32fb08bdf0 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:49 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v326: 177 pgs: 177 active+clean; 146 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 9.4 KiB/s wr, 3 op/s Dec 2 05:08:49 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "format": "json"}]: dispatch Dec 2 05:08:49 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:08:49 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:49.955 2 INFO neutron.agent.securitygroups_rpc [None req-59adf8f3-045e-4261-a367-eea8612462ef 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:50 localhost nova_compute[281045]: 2025-12-02 10:08:50.731 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] 
on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v327: 177 pgs: 177 active+clean; 146 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 9.4 KiB/s wr, 3 op/s Dec 2 05:08:51 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:51.457 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 
'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:51 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:51.460 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m Dec 2 05:08:51 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:51.462 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:51 localhost ovn_metadata_agent[159477]: 2025-12-02 10:08:51.463 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[db27c9c6-a71b-45f5-8752-bbd1d1e56256]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:08:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 05:08:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:08:52 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "format": "json"}]: dispatch Dec 2 05:08:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, vol_name:cephfs) < "" Dec 2 05:08:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, vol_name:cephfs) < "" Dec 2 05:08:52 localhost podman[316236]: 2025-12-02 10:08:52.075110537 +0000 UTC m=+0.081698611 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', 
'--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:08:52 localhost podman[316236]: 2025-12-02 10:08:52.113105714 +0000 UTC m=+0.119693788 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:08:52 localhost 
systemd[1]: tmp-crun.amLQFU.mount: Deactivated successfully. Dec 2 05:08:52 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:08:52 localhost podman[316237]: 2025-12-02 10:08:52.136648118 +0000 UTC m=+0.139109226 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9) Dec 2 05:08:52 localhost podman[316237]: 2025-12-02 10:08:52.152996149 +0000 UTC m=+0.155457257 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible) Dec 2 05:08:52 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:08:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:52 localhost nova_compute[281045]: 2025-12-02 10:08:52.722 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:53 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v328: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 517 B/s rd, 8.7 KiB/s wr, 3 op/s Dec 2 05:08:53 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:53.584 2 INFO neutron.agent.securitygroups_rpc [None req-e08d9635-b9e8-48c9-978b-72b4270a2462 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:08:54 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:08:54 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:54 localhost podman[316296]: 2025-12-02 10:08:54.480094623 +0000 UTC m=+0.057353803 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 2 05:08:54 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:54 localhost neutron_sriov_agent[255428]: 
2025-12-02 10:08:54.611 2 INFO neutron.agent.securitygroups_rpc [None req-3c6b2ae8-8852-4b91-8b57-db72052e455d 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:55 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:55.004 2 INFO neutron.agent.securitygroups_rpc [None req-c6b91e56-d8d3-49df-8fb0-b7b5f6e00308 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:08:55 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v329: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 2.7 KiB/s wr, 1 op/s Dec 2 05:08:55 localhost nova_compute[281045]: 2025-12-02 10:08:55.732 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:55 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:55.871 2 INFO neutron.agent.securitygroups_rpc [None req-62ccdf95-04fd-49bc-8e08-6a4afcc10f44 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:56 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:56.443 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:55Z, description=, device_id=e151137c-6a3f-4ce9-9721-f5df26cfefd0, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fbfa2337-e476-4b17-b43e-e595dbb78ebf, ip_allocation=immediate, mac_address=fa:16:3e:12:b0:ca, name=, 
network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2083, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:55Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:08:56 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:56.645 2 INFO neutron.agent.securitygroups_rpc [None req-2779f3cb-e43d-46a6-b42c-1a159a69c67f b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:08:56 localhost systemd[1]: tmp-crun.CfQFpp.mount: Deactivated successfully. 
Dec 2 05:08:56 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses Dec 2 05:08:56 localhost podman[316334]: 2025-12-02 10:08:56.710209726 +0000 UTC m=+0.057881899 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:08:56 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:08:56 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:08:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v330: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 2.7 KiB/s wr, 1 op/s Dec 2 05:08:57 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:08:57.206 262347 INFO neutron.agent.dhcp.agent [None req-30cf4cb4-3e3a-45a2-982a-3610b3cbd3c9 - - - - - -] DHCP configuration for ports {'fbfa2337-e476-4b17-b43e-e595dbb78ebf'} is completed#033[00m Dec 2 05:08:57 localhost neutron_sriov_agent[255428]: 2025-12-02 10:08:57.460 2 INFO neutron.agent.securitygroups_rpc [None req-675bc92b-ef71-4946-a4ea-2f67c0d27bea 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 
322961408 Dec 2 05:08:57 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "format": "json"}]: dispatch Dec 2 05:08:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:08:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:08:57 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "force": true, "format": "json"}]: dispatch Dec 2 05:08:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, vol_name:cephfs) < "" Dec 2 05:08:57 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945'' moved to trashcan Dec 2 05:08:57 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:08:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, vol_name:cephfs) < "" Dec 2 05:08:57 localhost nova_compute[281045]: 2025-12-02 10:08:57.773 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:57 localhost systemd[1]: Started /usr/bin/podman 
healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:08:58 localhost podman[316354]: 2025-12-02 10:08:58.072615768 +0000 UTC m=+0.080634009 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:58 localhost podman[316354]: 2025-12-02 10:08:58.109249704 +0000 
UTC m=+0.117267925 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:08:58 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:08:58 localhost nova_compute[281045]: 2025-12-02 10:08:58.993 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v331: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 4.6 KiB/s wr, 1 op/s Dec 2 05:09:00 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:00.328 2 INFO neutron.agent.securitygroups_rpc [None req-87a4be36-2a80-4d5d-bf3f-f65722b03fc3 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:09:00 localhost nova_compute[281045]: 2025-12-02 10:09:00.547 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:09:00 localhost nova_compute[281045]: 2025-12-02 10:09:00.734 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:00 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:00.769 2 INFO neutron.agent.securitygroups_rpc [None req-a8153026-65f9-484e-923d-c2362124502e 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:09:00 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "snap_name": 
"9664f206-7cb4-4a1a-b619-2a201c7ebe10_4a876e35-722e-46c7-9bcc-640ed22bd047", "force": true, "format": "json"}]: dispatch Dec 2 05:09:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10_4a876e35-722e-46c7-9bcc-640ed22bd047, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < "" Dec 2 05:09:00 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp' Dec 2 05:09:00 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp' to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta' Dec 2 05:09:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10_4a876e35-722e-46c7-9bcc-640ed22bd047, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < "" Dec 2 05:09:00 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "snap_name": "9664f206-7cb4-4a1a-b619-2a201c7ebe10", "force": true, "format": "json"}]: dispatch Dec 2 05:09:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < "" Dec 2 05:09:00 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp' Dec 2 05:09:00 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp' to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta' Dec 2 05:09:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < "" Dec 2 05:09:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v332: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 2.6 KiB/s wr, 0 op/s Dec 2 05:09:01 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:01.840 2 INFO neutron.agent.securitygroups_rpc [None req-a5be6bf0-ce48-4e65-99a4-3416f730b3a2 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:09:02 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:09:02 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:09:02 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:09:02 localhost podman[316392]: 2025-12-02 10:09:02.389488312 +0000 UTC m=+0.057434697 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 05:09:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:02 localhost nova_compute[281045]: 2025-12-02 10:09:02.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:09:02 localhost nova_compute[281045]: 2025-12-02 10:09:02.814 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:03 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v333: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 7.9 KiB/s wr, 2 op/s Dec 2 05:09:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:03.178 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:09:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:03.179 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:09:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:03.179 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 
0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:09:03 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:03.315 2 INFO neutron.agent.securitygroups_rpc [None req-11b93b80-ea7f-4df4-8312-05f04742e794 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:09:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:03.412 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 
'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 2 05:09:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:03.414 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated
Dec 2 05:09:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:03.417 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 2 05:09:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:03.418 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad65aaf-7c38-477d-96d6-ec4d54696013]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 2 05:09:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:03.421 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:03Z, description=, device_id=011c4c6c-4d4d-4af9-a901-565e82aa7620, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1952c163-508d-4f30-a86f-38b1231d5724, ip_allocation=immediate, mac_address=fa:16:3e:bb:fb:07, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2109, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:09:03Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 2 05:09:03 localhost nova_compute[281045]: 2025-12-02 10:09:03.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 05:09:03 localhost nova_compute[281045]: 2025-12-02 10:09:03.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 05:09:03 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses
Dec 2 05:09:03 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 2 05:09:03 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 2 05:09:03 localhost podman[316429]: 2025-12-02 10:09:03.599497901 +0000 UTC m=+0.041113434 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 2 05:09:03 localhost podman[239757]: time="2025-12-02T10:09:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 05:09:03 localhost podman[239757]: @ - - [02/Dec/2025:10:09:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 2 05:09:03 localhost podman[239757]: @ - - [02/Dec/2025:10:09:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19213 "" "Go-http-client/1.1"
Dec 2 05:09:04 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:04.053 262347 INFO neutron.agent.dhcp.agent [None req-bce2d02b-99cb-4ef0-89f9-a6fca7912bbb - - - - - -] DHCP configuration for ports {'1952c163-508d-4f30-a86f-38b1231d5724'} is completed
Dec 2 05:09:04 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "format": "json"}]: dispatch
Dec 2 05:09:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 2 05:09:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 2 05:09:04 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6' of type subvolume
Dec 2 05:09:04 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:09:04.201+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6' of type subvolume
Dec 2 05:09:04 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "force": true, "format": "json"}]: dispatch
Dec 2 05:09:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < ""
Dec 2 05:09:04 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6'' moved to trashcan
Dec 2 05:09:04 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 2 05:09:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < ""
Dec 2 05:09:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 2 05:09:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/251576263' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 2 05:09:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 2 05:09:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/251576263' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 2 05:09:04 localhost nova_compute[281045]: 2025-12-02 10:09:04.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 05:09:04 localhost nova_compute[281045]: 2025-12-02 10:09:04.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 2 05:09:04 localhost nova_compute[281045]: 2025-12-02 10:09:04.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 2 05:09:04 localhost nova_compute[281045]: 2025-12-02 10:09:04.552 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 2 05:09:05 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 2 05:09:05 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 2 05:09:05 localhost podman[316466]: 2025-12-02 10:09:05.065847628 +0000 UTC m=+0.059318374 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:09:05 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 2 05:09:05 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v334: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 7.2 KiB/s wr, 1 op/s
Dec 2 05:09:05 localhost nova_compute[281045]: 2025-12-02 10:09:05.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 05:09:05 localhost nova_compute[281045]: 2025-12-02 10:09:05.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 05:09:05 localhost nova_compute[281045]: 2025-12-02 10:09:05.546 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 05:09:05 localhost nova_compute[281045]: 2025-12-02 10:09:05.547 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 05:09:05 localhost nova_compute[281045]: 2025-12-02 10:09:05.547 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 05:09:05 localhost nova_compute[281045]: 2025-12-02 10:09:05.547 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 2 05:09:05 localhost nova_compute[281045]: 2025-12-02 10:09:05.548 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 05:09:05 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:05.572 2 INFO neutron.agent.securitygroups_rpc [None req-d150629a-bcce-4a38-b00c-70964b564cd8 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 2 05:09:05 localhost nova_compute[281045]: 2025-12-02 10:09:05.735 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:09:05 localhost systemd[1]: tmp-crun.ZguiCm.mount: Deactivated successfully.
Dec 2 05:09:06 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 2 05:09:06 localhost podman[316525]: 2025-12-02 10:09:06.002313721 +0000 UTC m=+0.069605819 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:09:06 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 2 05:09:06 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 2 05:09:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 2 05:09:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4130488056' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 2 05:09:06 localhost nova_compute[281045]: 2025-12-02 10:09:06.050 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 05:09:06 localhost nova_compute[281045]: 2025-12-02 10:09:06.225 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 2 05:09:06 localhost nova_compute[281045]: 2025-12-02 10:09:06.226 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11531MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 2 05:09:06 localhost nova_compute[281045]: 2025-12-02 10:09:06.227 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 05:09:06 localhost nova_compute[281045]: 2025-12-02 10:09:06.227 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 05:09:06 localhost nova_compute[281045]: 2025-12-02 10:09:06.307 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 2 05:09:06 localhost nova_compute[281045]: 2025-12-02 10:09:06.307 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 2 05:09:06 localhost nova_compute[281045]: 2025-12-02 10:09:06.346 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 05:09:06 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:06.684 2 INFO neutron.agent.securitygroups_rpc [None req-eff2d2a9-9509-4ec3-933e-196163edb064 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 2 05:09:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 2 05:09:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1335571893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 2 05:09:06 localhost nova_compute[281045]: 2025-12-02 10:09:06.807 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 05:09:06 localhost nova_compute[281045]: 2025-12-02 10:09:06.813 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 2 05:09:06 localhost nova_compute[281045]: 2025-12-02 10:09:06.831 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 2 05:09:06 localhost nova_compute[281045]: 2025-12-02 10:09:06.834 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 2 05:09:06 localhost nova_compute[281045]: 2025-12-02 10:09:06.835 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 05:09:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:09:06
Dec 2 05:09:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 2 05:09:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 2 05:09:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['vms', 'backups', '.mgr', 'volumes', 'manila_metadata', 'manila_data', 'images']
Dec 2 05:09:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 2 05:09:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:09:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:09:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:09:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:09:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:09:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:09:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v335: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 7.2 KiB/s wr, 1 op/s
Dec 2 05:09:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 2 05:09:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:09:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 2 05:09:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:09:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 2 05:09:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:09:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.443522589800856e-05 quantized to 32 (current 32)
Dec 2 05:09:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:09:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 2 05:09:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:09:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 2 05:09:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:09:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32)
Dec 2 05:09:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:09:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.307562116136237e-05 of space, bias 4.0, pg target 0.03428819444444444 quantized to 16 (current 16)
Dec 2 05:09:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 2 05:09:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 2 05:09:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 2 05:09:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 2 05:09:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 2 05:09:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 2 05:09:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 2 05:09:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 2 05:09:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 2 05:09:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 2 05:09:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:09:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e154 e154: 6 total, 6 up, 6 in
Dec 2 05:09:07 localhost nova_compute[281045]: 2025-12-02 10:09:07.818 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:09:07 localhost nova_compute[281045]: 2025-12-02 10:09:07.835 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 05:09:09 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v337: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 14 KiB/s wr, 4 op/s
Dec 2 05:09:09 localhost nova_compute[281045]: 2025-12-02 10:09:09.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 05:09:09 localhost nova_compute[281045]: 2025-12-02 10:09:09.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 2 05:09:10 localhost nova_compute[281045]: 2025-12-02 10:09:10.738 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:09:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v338: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 14 KiB/s wr, 4 op/s
Dec 2 05:09:11 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:11.261 2 INFO neutron.agent.securitygroups_rpc [None req-d69c9fb8-aada-452c-807d-ffbf23ad4dde b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']
Dec 2 05:09:11 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:11.330 2 INFO neutron.agent.securitygroups_rpc [None req-158ba6e6-ae47-4633-afb7-8fe1fff090db 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 2 05:09:12 localhost openstack_network_exporter[241816]: ERROR 10:09:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 05:09:12 localhost openstack_network_exporter[241816]: ERROR 10:09:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 05:09:12 localhost openstack_network_exporter[241816]:
Dec 2 05:09:12 localhost openstack_network_exporter[241816]: ERROR 10:09:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:09:12 localhost openstack_network_exporter[241816]: ERROR 10:09:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:09:12 localhost openstack_network_exporter[241816]: ERROR 10:09:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 05:09:12 localhost openstack_network_exporter[241816]:
Dec 2 05:09:12 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:12.331 2 INFO neutron.agent.securitygroups_rpc [None req-9fd8a609-568a-4b57-8025-f518255ff815 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']
Dec 2 05:09:12 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:12.452 2 INFO neutron.agent.securitygroups_rpc [None req-4d3ff4f1-7788-4535-9205-e4647a2c3ad1 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 2 05:09:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:09:12 localhost nova_compute[281045]: 2025-12-02 10:09:12.821 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:09:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v339: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1.4 KiB/s rd, 8.9 KiB/s wr, 5 op/s
Dec 2 05:09:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 05:09:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 05:09:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 05:09:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 05:09:15 localhost podman[316579]: 2025-12-02 10:09:15.085556591 +0000 UTC m=+0.074596974 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:09:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v340: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1.4 KiB/s rd, 8.9 KiB/s wr, 5 op/s
Dec 2 05:09:15 localhost podman[316579]: 2025-12-02 10:09:15.125117966 +0000 UTC m=+0.114158259 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 2 05:09:15 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 2 05:09:15 localhost podman[316573]: 2025-12-02 10:09:15.141552511 +0000 UTC m=+0.132033139 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 2 05:09:15 localhost podman[316573]: 2025-12-02 10:09:15.156936063 +0000 UTC m=+0.147416671 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 2 05:09:15 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 2 05:09:15 localhost podman[316572]: 2025-12-02 10:09:15.253932124 +0000 UTC m=+0.248687272 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 05:09:15 localhost podman[316572]: 2025-12-02 10:09:15.287064782 +0000 UTC m=+0.281819930 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 2 05:09:15 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 2 05:09:15 localhost podman[316571]: 2025-12-02 10:09:15.301889618 +0000 UTC m=+0.300539396 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 2 05:09:15 localhost podman[316571]: 2025-12-02 10:09:15.309986356 +0000 UTC m=+0.308636164 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 2 05:09:15 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 05:09:15 localhost nova_compute[281045]: 2025-12-02 10:09:15.791 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 e155: 6 total, 6 up, 6 in
Dec 2 05:09:16 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:16.738 262347 INFO neutron.agent.linux.ip_lib [None req-79eb6d67-53bd-4790-9715-8c98e8b30979 - - - - - -] Device tap8d7aba05-5e cannot be used as it has no MAC address#033[00m
Dec 2 05:09:16 localhost nova_compute[281045]: 2025-12-02 10:09:16.760 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:16 localhost kernel: device tap8d7aba05-5e entered promiscuous mode
Dec 2 05:09:16 localhost NetworkManager[5967]: [1764670156.7698] manager: (tap8d7aba05-5e): new Generic device (/org/freedesktop/NetworkManager/Devices/31)
Dec 2 05:09:16 localhost ovn_controller[153778]: 2025-12-02T10:09:16Z|00142|binding|INFO|Claiming lport 8d7aba05-5eab-44a1-aacc-c2b62f525db1 for this chassis.
Dec 2 05:09:16 localhost nova_compute[281045]: 2025-12-02 10:09:16.769 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:16 localhost ovn_controller[153778]: 2025-12-02T10:09:16Z|00143|binding|INFO|8d7aba05-5eab-44a1-aacc-c2b62f525db1: Claiming unknown
Dec 2 05:09:16 localhost systemd-udevd[316666]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 05:09:16 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:16.782 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-669f1b01-3857-4ea6-8083-25e0b2ce70bc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-669f1b01-3857-4ea6-8083-25e0b2ce70bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '873db74a4a7a4aad823d1b7e8b2d6c26', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d91ccfa3-a134-4f2e-be7a-020d064cc147, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8d7aba05-5eab-44a1-aacc-c2b62f525db1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:09:16 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:16.784 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 8d7aba05-5eab-44a1-aacc-c2b62f525db1 in datapath 669f1b01-3857-4ea6-8083-25e0b2ce70bc bound to our chassis#033[00m
Dec 2 05:09:16 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:16.785 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 669f1b01-3857-4ea6-8083-25e0b2ce70bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:09:16 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:16.786 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[77643317-c545-47b8-970d-72a95d765161]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:09:16 localhost ovn_controller[153778]: 2025-12-02T10:09:16Z|00144|binding|INFO|Setting lport 8d7aba05-5eab-44a1-aacc-c2b62f525db1 ovn-installed in OVS
Dec 2 05:09:16 localhost ovn_controller[153778]: 2025-12-02T10:09:16Z|00145|binding|INFO|Setting lport 8d7aba05-5eab-44a1-aacc-c2b62f525db1 up in Southbound
Dec 2 05:09:16 localhost nova_compute[281045]: 2025-12-02 10:09:16.808 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:16 localhost journal[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 2 05:09:16 localhost journal[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 2 05:09:16 localhost journal[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 2 05:09:16 localhost journal[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 2 05:09:16 localhost journal[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 2 05:09:16 localhost journal[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 2 05:09:16 localhost nova_compute[281045]: 2025-12-02 10:09:16.844 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:16 localhost journal[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 2 05:09:16 localhost journal[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 2 05:09:16 localhost nova_compute[281045]: 2025-12-02 10:09:16.875 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:17 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:17.031 262347 INFO neutron.agent.linux.ip_lib [None req-a65c67f7-0dca-4ef5-9c28-9edf2bb2ef9b - - - - - -] Device tap21d38e5b-83 cannot be used as it has no MAC address#033[00m
Dec 2 05:09:17 localhost nova_compute[281045]: 2025-12-02 10:09:17.056 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:17 localhost kernel: device tap21d38e5b-83 entered promiscuous mode
Dec 2 05:09:17 localhost NetworkManager[5967]: [1764670157.0616] manager: (tap21d38e5b-83): new Generic device (/org/freedesktop/NetworkManager/Devices/32)
Dec 2 05:09:17 localhost nova_compute[281045]: 2025-12-02 10:09:17.061 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:17 localhost systemd-udevd[316668]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 05:09:17 localhost ovn_controller[153778]: 2025-12-02T10:09:17Z|00146|binding|INFO|Claiming lport 21d38e5b-83d6-443b-a9e2-4f6016ed9773 for this chassis.
Dec 2 05:09:17 localhost ovn_controller[153778]: 2025-12-02T10:09:17Z|00147|binding|INFO|21d38e5b-83d6-443b-a9e2-4f6016ed9773: Claiming unknown
Dec 2 05:09:17 localhost nova_compute[281045]: 2025-12-02 10:09:17.081 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:17 localhost ovn_controller[153778]: 2025-12-02T10:09:17Z|00148|binding|INFO|Setting lport 21d38e5b-83d6-443b-a9e2-4f6016ed9773 ovn-installed in OVS
Dec 2 05:09:17 localhost nova_compute[281045]: 2025-12-02 10:09:17.084 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v342: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1.5 KiB/s rd, 9.4 KiB/s wr, 5 op/s
Dec 2 05:09:17 localhost nova_compute[281045]: 2025-12-02 10:09:17.178 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:17 localhost nova_compute[281045]: 2025-12-02 10:09:17.218 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:17 localhost nova_compute[281045]: 2025-12-02 10:09:17.245 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:09:17 localhost podman[316775]:
Dec 2 05:09:17 localhost podman[316775]: 2025-12-02 10:09:17.697168056 +0000 UTC m=+0.053744622 container create 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 2 05:09:17 localhost systemd[1]: Started libpod-conmon-093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed.scope.
Dec 2 05:09:17 localhost systemd[1]: Started libcrun container.
Dec 2 05:09:17 localhost podman[316775]: 2025-12-02 10:09:17.673999095 +0000 UTC m=+0.030575681 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:09:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a3d1465d9dc770bb3a503daf7256c2208118b497345b7f792f0cc2082142ec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:09:17 localhost podman[316775]: 2025-12-02 10:09:17.785158819 +0000 UTC m=+0.141735395 container init 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 2 05:09:17 localhost podman[316775]: 2025-12-02 10:09:17.794223688 +0000 UTC m=+0.150800264 container start 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 2 05:09:17 localhost dnsmasq[316793]: started, version 2.85 cachesize 150
Dec 2 05:09:17 localhost dnsmasq[316793]: DNS service limited to local subnets
Dec 2 05:09:17 localhost dnsmasq[316793]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:09:17 localhost dnsmasq[316793]: warning: no upstream servers configured
Dec 2 05:09:17 localhost dnsmasq-dhcp[316793]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 2 05:09:17 localhost dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 0 addresses
Dec 2 05:09:17 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 2 05:09:17 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 2 05:09:17 localhost nova_compute[281045]: 2025-12-02 10:09:17.824 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:18 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:18.014 2 INFO neutron.agent.securitygroups_rpc [None req-1b7dd085-a5c1-4a81-bd02-4cabc7845a6f f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m
Dec 2 05:09:18 localhost ovn_controller[153778]: 2025-12-02T10:09:18Z|00149|binding|INFO|Setting lport 21d38e5b-83d6-443b-a9e2-4f6016ed9773 up in Southbound
Dec 2 05:09:18 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:18.263 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-1d564796-ac76-41ff-8a1e-6fbd19d356c5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d564796-ac76-41ff-8a1e-6fbd19d356c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '204a1137a20e40c995bb9cd512e75a5c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45beaf16-b07c-44e8-bec5-71e8573d4df7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=21d38e5b-83d6-443b-a9e2-4f6016ed9773) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:09:18 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:18.265 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 21d38e5b-83d6-443b-a9e2-4f6016ed9773 in datapath 1d564796-ac76-41ff-8a1e-6fbd19d356c5 bound to our chassis#033[00m
Dec 2 05:09:18 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:18.267 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1d564796-ac76-41ff-8a1e-6fbd19d356c5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:09:18 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:18.267 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[67afd2e2-135e-4642-9917-d3dbea130796]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:09:18 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:18.985 262347 INFO neutron.agent.dhcp.agent [None req-0d750567-274c-4482-bf1b-de6317782f28 - - - - - -] DHCP configuration for ports {'bc29dfa6-ee29-4c8e-8916-3c71b290ac02'} is completed#033[00m
Dec 2 05:09:18 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:18.987 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:16Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=df3b3a5c-94ef-40df-81fb-64a96ab7af31, ip_allocation=immediate, mac_address=fa:16:3e:ad:98:42, name=tempest-AllowedAddressPairIpV6TestJSON-2067363510, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:13Z, description=, dns_domain=, id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1035488035, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12314, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2159, status=ACTIVE, subnets=['c6868224-de5b-425b-bf13-d943ff06e669'], tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6'], standard_attr_id=2177, status=DOWN, tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:17Z on network 669f1b01-3857-4ea6-8083-25e0b2ce70bc#033[00m
Dec 2 05:09:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v343: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 2.6 KiB/s wr, 37 op/s
Dec 2 05:09:19 localhost systemd[1]: tmp-crun.v2soeA.mount: Deactivated successfully.
Dec 2 05:09:19 localhost dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 1 addresses
Dec 2 05:09:19 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 2 05:09:19 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 2 05:09:19 localhost podman[316813]: 2025-12-02 10:09:19.180959858 +0000 UTC m=+0.066784753 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 2 05:09:19 localhost podman[316857]:
Dec 2 05:09:19 localhost podman[316857]: 2025-12-02 10:09:19.400791752 +0000 UTC m=+0.076567793 container create 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 2 05:09:19 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:19.425 2 INFO neutron.agent.securitygroups_rpc [None req-278e88f3-562a-4f7b-8f10-c3e2bfd4ee2e 8b49e5c866794aad866d55bb5f154d67 7dffef2e74844a7ebb6ee68826fb7e57 - - default default] Security group member updated ['32471057-4d02-424a-9e3e-19629ab1677d']#033[00m
Dec 2 05:09:19 localhost systemd[1]: Started libpod-conmon-5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83.scope.
Dec 2 05:09:19 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:19.444 262347 INFO neutron.agent.dhcp.agent [None req-2d29b2b2-88a6-434e-bf1f-1144e3e682fc - - - - - -] DHCP configuration for ports {'df3b3a5c-94ef-40df-81fb-64a96ab7af31'} is completed#033[00m
Dec 2 05:09:19 localhost systemd[1]: Started libcrun container.
Dec 2 05:09:19 localhost podman[316857]: 2025-12-02 10:09:19.360493855 +0000 UTC m=+0.036269896 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:09:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de1d49012998774e6961c351ee30dc6327212e570cd42de9146733d20d3604cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:09:19 localhost podman[316857]: 2025-12-02 10:09:19.471615418 +0000 UTC m=+0.147391459 container init 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 2 05:09:19 localhost podman[316857]: 2025-12-02 10:09:19.481679538 +0000 UTC m=+0.157455579 container start 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 2 05:09:19 localhost dnsmasq[316876]: started, version 2.85 cachesize 150
Dec 2 05:09:19 localhost dnsmasq[316876]: DNS service limited to local subnets
Dec 2 05:09:19 localhost dnsmasq[316876]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:09:19 localhost dnsmasq[316876]: warning: no upstream servers configured
Dec 2 05:09:19 localhost dnsmasq-dhcp[316876]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 2 05:09:19 localhost dnsmasq[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/addn_hosts - 0 addresses
Dec 2 05:09:19 localhost dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/host
Dec 2 05:09:19 localhost dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/opts
Dec 2 05:09:19 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:19.601 262347 INFO neutron.agent.dhcp.agent [None req-7bdc6919-1204-438f-8683-d92b4f509fd2 - - - - - -] DHCP configuration for ports {'d0ba16e3-7d3e-48b5-87da-d48deb1e0c57'} is completed#033[00m
Dec 2 05:09:19 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:19.739 2 INFO neutron.agent.securitygroups_rpc [None req-e9fc3440-8683-40fd-946b-446e84f960a4 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m
Dec 2 05:09:20 localhost nova_compute[281045]: 2025-12-02 10:09:20.792 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v344: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 2.6 KiB/s wr, 37 op/s
Dec 2 05:09:21 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:21.179 2 INFO neutron.agent.securitygroups_rpc [None req-8371118d-5c83-45c5-bfa7-f542b4f1df3f 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m
Dec 2 05:09:21 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:21.512 2 INFO neutron.agent.securitygroups_rpc [None req-10f867de-2584-4ed7-a0e8-fb9276ac33a8 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m
Dec 2 05:09:21 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 2 05:09:21 localhost podman[316894]: 2025-12-02 10:09:21.544733308 +0000 UTC m=+0.052680280 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:09:21 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 2 05:09:21 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 2 05:09:21 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:21.596 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:20Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f24d2970-a1cd-433b-82f0-ff8977158681, ip_allocation=immediate, mac_address=fa:16:3e:26:7b:6b, name=tempest-AllowedAddressPairIpV6TestJSON-328736239, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:13Z, description=, dns_domain=, id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1035488035, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12314, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2159, status=ACTIVE, subnets=['c6868224-de5b-425b-bf13-d943ff06e669'], tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6'], standard_attr_id=2182, status=DOWN, tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:20Z on network 669f1b01-3857-4ea6-8083-25e0b2ce70bc#033[00m
Dec 2 05:09:21 localhost podman[316932]: 2025-12-02 10:09:21.753543804 +0000 UTC m=+0.043070795 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 2 05:09:21 localhost dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 2 addresses
Dec 2 05:09:21 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 2 05:09:21 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 2 05:09:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 2 05:09:21 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/103747809' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 2 05:09:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 2 05:09:21 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/103747809' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 2 05:09:22 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:22.031 262347 INFO neutron.agent.dhcp.agent [None req-a364e023-3b56-4b4f-94dd-a490e36fe97e - - - - - -] DHCP configuration for ports {'f24d2970-a1cd-433b-82f0-ff8977158681'} is completed#033[00m
Dec 2 05:09:22 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:22.202 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:21Z, description=, device_id=f27b9429-6ac7-4a48-8a17-a61fa778ae6e, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=65e2923a-c07d-4b60-8e34-1e64cbdb4494, ip_allocation=immediate, mac_address=fa:16:3e:7f:f9:37, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:12Z, description=, dns_domain=, id=1d564796-ac76-41ff-8a1e-6fbd19d356c5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1535011901, port_security_enabled=True, project_id=204a1137a20e40c995bb9cd512e75a5c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16351, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2149, status=ACTIVE, subnets=['29255b27-c554-4680-ad67-e29997db9d5a'], tags=[], tenant_id=204a1137a20e40c995bb9cd512e75a5c, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=1d564796-ac76-41ff-8a1e-6fbd19d356c5, port_security_enabled=False, project_id=204a1137a20e40c995bb9cd512e75a5c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2191, status=DOWN, tags=[], tenant_id=204a1137a20e40c995bb9cd512e75a5c, updated_at=2025-12-02T10:09:21Z on network 1d564796-ac76-41ff-8a1e-6fbd19d356c5#033[00m
Dec 2 05:09:22 localhost dnsmasq[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/addn_hosts - 1 addresses
Dec 2 05:09:22 localhost dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/host
Dec 2 05:09:22 localhost dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/opts
Dec 2 05:09:22 localhost podman[316970]: 2025-12-02 10:09:22.444602759 +0000 UTC m=+0.073051247 container kill 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 2 05:09:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 05:09:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 05:09:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:09:22 localhost podman[316986]: 2025-12-02 10:09:22.539687459 +0000 UTC m=+0.068461004 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec 2 05:09:22 localhost systemd[1]: tmp-crun.phhLTX.mount: Deactivated successfully.
Dec 2 05:09:22 localhost podman[316985]: 2025-12-02 10:09:22.585819407 +0000 UTC m=+0.114745746 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 05:09:22 localhost podman[316985]: 2025-12-02 10:09:22.599746495 +0000 UTC m=+0.128672854 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 2 05:09:22 localhost podman[316986]: 2025-12-02 10:09:22.610812175 +0000 UTC m=+0.139585680 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Dec 2 05:09:22 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 05:09:22 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 05:09:22 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:22.679 262347 INFO neutron.agent.dhcp.agent [None req-3439f936-72de-446d-b723-e4be82544543 - - - - - -] DHCP configuration for ports {'65e2923a-c07d-4b60-8e34-1e64cbdb4494'} is completed#033[00m
Dec 2 05:09:22 localhost nova_compute[281045]: 2025-12-02 10:09:22.827 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:23 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:23.060 2 INFO neutron.agent.securitygroups_rpc [None req-7bc34f63-f96a-4396-b70c-07601d07dee2 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m
Dec 2 05:09:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v345: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 2.0 KiB/s wr, 66 op/s
Dec 2 05:09:23 localhost systemd[1]: tmp-crun.IVs1lO.mount: Deactivated successfully.
Dec 2 05:09:23 localhost dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 1 addresses
Dec 2 05:09:23 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 2 05:09:23 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 2 05:09:23 localhost podman[317050]: 2025-12-02 10:09:23.30057538 +0000 UTC m=+0.045415527 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:09:23 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:23.773 2 INFO neutron.agent.securitygroups_rpc [None req-ef3a9568-c379-4b2a-a06d-b347ad68d0c7 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m
Dec 2 05:09:23 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:23.834 2 INFO neutron.agent.securitygroups_rpc [None req-f87c0fb9-70c4-4316-8fe5-2d1d482ef952 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m
Dec 2 05:09:23 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:23.876 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:23Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e381631f-20a2-4180-bd13-c2a6a1575aa1, ip_allocation=immediate, mac_address=fa:16:3e:9c:77:0d, name=tempest-AllowedAddressPairIpV6TestJSON-1257736356, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:13Z, description=, dns_domain=, id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1035488035, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12314, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2159, status=ACTIVE, subnets=['c6868224-de5b-425b-bf13-d943ff06e669'], tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6'], standard_attr_id=2202, status=DOWN, tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:23Z on network 669f1b01-3857-4ea6-8083-25e0b2ce70bc#033[00m
Dec 2 05:09:24 localhost podman[317090]: 2025-12-02 10:09:24.039124313 +0000 UTC m=+0.053422262 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:09:24 localhost dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 2 addresses
Dec 2 05:09:24 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 2 05:09:24 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 2 05:09:24 localhost systemd[1]: tmp-crun.TsgDIv.mount: Deactivated successfully.
Dec 2 05:09:24 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:24.259 262347 INFO neutron.agent.dhcp.agent [None req-42c91359-43e3-4299-b62e-ab632e3bddac - - - - - -] DHCP configuration for ports {'e381631f-20a2-4180-bd13-c2a6a1575aa1'} is completed#033[00m
Dec 2 05:09:24 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:24.982 2 INFO neutron.agent.securitygroups_rpc [None req-b3aa5b43-46a5-4652-aa07-2f62355aecf1 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m
Dec 2 05:09:25 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:25.013 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:21Z, description=, device_id=f27b9429-6ac7-4a48-8a17-a61fa778ae6e, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=65e2923a-c07d-4b60-8e34-1e64cbdb4494, ip_allocation=immediate, mac_address=fa:16:3e:7f:f9:37, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:12Z, description=, dns_domain=, id=1d564796-ac76-41ff-8a1e-6fbd19d356c5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1535011901, port_security_enabled=True, project_id=204a1137a20e40c995bb9cd512e75a5c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16351, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2149, status=ACTIVE, subnets=['29255b27-c554-4680-ad67-e29997db9d5a'], tags=[], tenant_id=204a1137a20e40c995bb9cd512e75a5c, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=1d564796-ac76-41ff-8a1e-6fbd19d356c5, port_security_enabled=False, project_id=204a1137a20e40c995bb9cd512e75a5c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2191, status=DOWN, tags=[], tenant_id=204a1137a20e40c995bb9cd512e75a5c, updated_at=2025-12-02T10:09:21Z on network 1d564796-ac76-41ff-8a1e-6fbd19d356c5#033[00m
Dec 2 05:09:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v346: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 2.0 KiB/s wr, 66 op/s
Dec 2 05:09:25 localhost systemd[1]: tmp-crun.bqGfNW.mount: Deactivated successfully.
Dec 2 05:09:25 localhost dnsmasq[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/addn_hosts - 1 addresses
Dec 2 05:09:25 localhost dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/host
Dec 2 05:09:25 localhost dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/opts
Dec 2 05:09:25 localhost podman[317192]: 2025-12-02 10:09:25.259254473 +0000 UTC m=+0.063719738 container kill 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:09:25 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 05:09:25 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 05:09:25 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 2 05:09:25 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 05:09:25 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 2 05:09:25 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev 6ff752fe-a7eb-4423-9e3b-217b2b5a72b8 (Updating node-proxy deployment (+3 -> 3))
Dec 2 05:09:25 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 6ff752fe-a7eb-4423-9e3b-217b2b5a72b8 (Updating node-proxy deployment (+3 -> 3))
Dec 2 05:09:25 localhost ceph-mgr[287188]: [progress INFO root] Completed event 6ff752fe-a7eb-4423-9e3b-217b2b5a72b8 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 2 05:09:25 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 2 05:09:25 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 2 05:09:25 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 05:09:25 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:09:25 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:25.511 262347 INFO neutron.agent.dhcp.agent [None req-320a362b-79a2-4930-bdff-79a7dbd98438 - - - - - -] DHCP configuration for ports {'65e2923a-c07d-4b60-8e34-1e64cbdb4494'} is completed#033[00m
Dec 2 05:09:25 localhost nova_compute[281045]: 2025-12-02 10:09:25.794 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:09:26 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 2 05:09:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:09:26 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:26.131 2 INFO neutron.agent.securitygroups_rpc [None req-28c3366c-1c91-49f5-b694-3c934cb049e5 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m
Dec 2 05:09:26 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/.meta.tmp'
Dec 2 05:09:26 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/.meta.tmp' to config b'/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/.meta'
Dec 2 05:09:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:09:26 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "format": "json"}]: dispatch
Dec 2 05:09:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:09:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:09:26 localhost dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 1 addresses
Dec 2 05:09:26 localhost podman[317249]: 2025-12-02 10:09:26.344375586 +0000 UTC m=+0.048719028 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:09:26 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 2 05:09:26 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 2 05:09:26 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:26.379 2 INFO neutron.agent.securitygroups_rpc [None req-ec4a6adc-d68b-4418-9c41-c326e9a3fc34 49e91c7702d54b1ab47e5f6dec5e0208 204a1137a20e40c995bb9cd512e75a5c - - default default] Security group member updated ['53fe5435-6101-4ff1-81ad-b53da833172b']#033[00m
Dec 2 05:09:26 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:26.414 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:25Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c8669427-008b-4e2f-86c2-feca028efb63, ip_allocation=immediate, mac_address=fa:16:3e:cf:64:6b, name=tempest-FloatingIPNegativeTestJSON-119382670, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:12Z, description=, dns_domain=, id=1d564796-ac76-41ff-8a1e-6fbd19d356c5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1535011901, port_security_enabled=True, project_id=204a1137a20e40c995bb9cd512e75a5c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16351, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2149, status=ACTIVE, subnets=['29255b27-c554-4680-ad67-e29997db9d5a'], tags=[], tenant_id=204a1137a20e40c995bb9cd512e75a5c, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=1d564796-ac76-41ff-8a1e-6fbd19d356c5, port_security_enabled=True, project_id=204a1137a20e40c995bb9cd512e75a5c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['53fe5435-6101-4ff1-81ad-b53da833172b'], standard_attr_id=2208, status=DOWN, tags=[], tenant_id=204a1137a20e40c995bb9cd512e75a5c, updated_at=2025-12-02T10:09:26Z on network 1d564796-ac76-41ff-8a1e-6fbd19d356c5#033[00m
Dec 2 05:09:26 localhost podman[317287]: 2025-12-02 10:09:26.647355026 +0000 UTC m=+0.059505360 container kill 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 2 05:09:26 localhost dnsmasq[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/addn_hosts - 2 addresses
Dec 2 05:09:26 localhost dnsmasq-dhcp[316876]: read
/var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/host Dec 2 05:09:26 localhost dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/opts Dec 2 05:09:27 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:27.038 262347 INFO neutron.agent.dhcp.agent [None req-6062ca53-8e6b-4d90-9e74-cc59a3f0a59c - - - - - -] DHCP configuration for ports {'c8669427-008b-4e2f-86c2-feca028efb63'} is completed#033[00m Dec 2 05:09:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v347: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 1.8 KiB/s wr, 61 op/s Dec 2 05:09:27 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:09:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:09:27 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:27.367 2 INFO neutron.agent.securitygroups_rpc [None req-a7f54859-efcd-4ecf-b40a-33f0bd3f4545 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:27 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:09:27 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:27.443 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:26Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=dccd0b7e-05df-4bac-ab13-aec6a2f86fc9, ip_allocation=immediate, mac_address=fa:16:3e:db:d1:63, name=tempest-AllowedAddressPairIpV6TestJSON-1534878287, network=admin_state_up=True, availability_zone_hints=[], 
availability_zones=[], created_at=2025-12-02T10:09:13Z, description=, dns_domain=, id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1035488035, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12314, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2159, status=ACTIVE, subnets=['c6868224-de5b-425b-bf13-d943ff06e669'], tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6'], standard_attr_id=2215, status=DOWN, tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:26Z on network 669f1b01-3857-4ea6-8083-25e0b2ce70bc#033[00m Dec 2 05:09:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:27 localhost dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 2 addresses Dec 2 05:09:27 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host Dec 2 05:09:27 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:27.616 2 INFO neutron.agent.securitygroups_rpc [None req-d5002157-4534-49a0-a135-4c64a8485ed7 8b49e5c866794aad866d55bb5f154d67 7dffef2e74844a7ebb6ee68826fb7e57 - - default default] Security group member updated ['32471057-4d02-424a-9e3e-19629ab1677d']#033[00m Dec 2 05:09:27 localhost dnsmasq-dhcp[316793]: read 
/var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts Dec 2 05:09:27 localhost podman[317326]: 2025-12-02 10:09:27.616483284 +0000 UTC m=+0.050576925 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:27 localhost nova_compute[281045]: 2025-12-02 10:09:27.829 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:27 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:27.896 262347 INFO neutron.agent.dhcp.agent [None req-0e50c9c6-ba00-4350-93a5-583ee1d66e85 - - - - - -] DHCP configuration for ports {'dccd0b7e-05df-4bac-ab13-aec6a2f86fc9'} is completed#033[00m Dec 2 05:09:28 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:28.196 2 INFO neutron.agent.securitygroups_rpc [None req-31d4af97-7fb6-4706-a5b2-299b30ee98fa 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v348: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 4.7 KiB/s wr, 56 op/s Dec 2 05:09:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 05:09:29 localhost podman[317348]: 2025-12-02 10:09:29.359501371 +0000 UTC m=+0.079707511 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:09:29 localhost podman[317348]: 2025-12-02 10:09:29.374921524 +0000 UTC m=+0.095127674 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:09:29 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:09:30 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:09:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:09:30 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 2 05:09:30 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:09:30 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:09:30 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:09:30 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", 
"allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:09:30 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:30.732 2 INFO neutron.agent.securitygroups_rpc [None req-b3ef0962-bb50-4849-b9de-83492a397177 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:30 localhost nova_compute[281045]: 2025-12-02 10:09:30.797 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:30 localhost dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 1 addresses Dec 2 05:09:30 localhost podman[317384]: 2025-12-02 10:09:30.931573076 +0000 UTC m=+0.061173031 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:09:30 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host Dec 2 05:09:30 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts 
Dec 2 05:09:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v349: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 3.6 KiB/s wr, 27 op/s Dec 2 05:09:31 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:31.262 2 INFO neutron.agent.securitygroups_rpc [None req-b62fea3d-778e-4171-9633-628f1b789028 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:31 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:09:31 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:31 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:31 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow 
r"], "format": "json"}]': finished Dec 2 05:09:32 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:32.469 2 INFO neutron.agent.securitygroups_rpc [None req-1d81950f-2cd1-4171-b1b7-8ccf81612998 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:32 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:32.507 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:31Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d1cb1123-b6c2-4482-8391-36bb1cb58ab8, ip_allocation=immediate, mac_address=fa:16:3e:18:fb:83, name=tempest-AllowedAddressPairIpV6TestJSON-890555371, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:13Z, description=, dns_domain=, id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1035488035, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12314, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2159, status=ACTIVE, subnets=['c6868224-de5b-425b-bf13-d943ff06e669'], tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, 
port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6'], standard_attr_id=2232, status=DOWN, tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:31Z on network 669f1b01-3857-4ea6-8083-25e0b2ce70bc#033[00m Dec 2 05:09:32 localhost dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 2 addresses Dec 2 05:09:32 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host Dec 2 05:09:32 localhost podman[317424]: 2025-12-02 10:09:32.696144345 +0000 UTC m=+0.054738973 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:32 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts Dec 2 05:09:32 localhost nova_compute[281045]: 2025-12-02 10:09:32.831 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:33 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:33.079 262347 INFO neutron.agent.dhcp.agent [None req-f2ac2ad4-5582-4713-9d09-3251bba1f736 - - - - - -] DHCP configuration for ports {'d1cb1123-b6c2-4482-8391-36bb1cb58ab8'} is completed#033[00m Dec 2 05:09:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v350: 177 pgs: 177 
active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 8.7 KiB/s wr, 29 op/s Dec 2 05:09:33 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:33.506 2 INFO neutron.agent.securitygroups_rpc [None req-2ab86868-457f-4852-a90e-5fcf962a86b2 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:33 localhost podman[239757]: time="2025-12-02T10:09:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:09:33 localhost podman[239757]: @ - - [02/Dec/2025:10:09:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 160387 "" "Go-http-client/1.1" Dec 2 05:09:33 localhost podman[239757]: @ - - [02/Dec/2025:10:09:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20166 "" "Go-http-client/1.1" Dec 2 05:09:33 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:33.706 2 INFO neutron.agent.securitygroups_rpc [None req-4574e29d-6803-42ec-b043-afe3e9e41c81 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:33 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:33.748 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:33Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ac73148c-331f-4c9c-ac3a-daf3156fc54d, ip_allocation=immediate, mac_address=fa:16:3e:94:0e:a8, name=tempest-AllowedAddressPairIpV6TestJSON-603020251, network=admin_state_up=True, 
availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:13Z, description=, dns_domain=, id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1035488035, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12314, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2159, status=ACTIVE, subnets=['c6868224-de5b-425b-bf13-d943ff06e669'], tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6'], standard_attr_id=2236, status=DOWN, tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:33Z on network 669f1b01-3857-4ea6-8083-25e0b2ce70bc#033[00m Dec 2 05:09:33 localhost dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 3 addresses Dec 2 05:09:33 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host Dec 2 05:09:33 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts Dec 2 05:09:33 localhost podman[317463]: 2025-12-02 10:09:33.912958733 +0000 UTC m=+0.046818918 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 05:09:33 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch Dec 2 05:09:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:34 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 2 05:09:34 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:09:34 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Dec 2 05:09:34 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:09:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:34 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", 
"auth_id": "alice", "format": "json"}]: dispatch Dec 2 05:09:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:34 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:09:34 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:09:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:34 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:34.127 262347 INFO neutron.agent.dhcp.agent [None req-0e400471-fa0f-44a4-a4da-ef84dc52ed18 - - - - - -] DHCP configuration for ports {'ac73148c-331f-4c9c-ac3a-daf3156fc54d'} is completed#033[00m Dec 2 05:09:34 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:34.167 2 INFO neutron.agent.securitygroups_rpc [None req-808059d3-8bd0-4321-909f-628d45d51793 49e91c7702d54b1ab47e5f6dec5e0208 204a1137a20e40c995bb9cd512e75a5c - - default default] Security group member updated ['53fe5435-6101-4ff1-81ad-b53da833172b']#033[00m Dec 2 05:09:34 localhost nova_compute[281045]: 2025-12-02 10:09:34.222 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:34 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:34.221 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], 
options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:34 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:34.223 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:09:34 localhost dnsmasq[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/addn_hosts - 1 addresses Dec 2 05:09:34 localhost podman[317504]: 2025-12-02 10:09:34.415638039 +0000 UTC m=+0.043849478 container kill 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:09:34 localhost dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/host Dec 2 05:09:34 localhost dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/opts Dec 2 05:09:34 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:09:34 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", 
"entity": "client.alice"} : dispatch Dec 2 05:09:34 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:09:34 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 2 05:09:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v351: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 8.2 KiB/s wr, 2 op/s Dec 2 05:09:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:35.225 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:09:35 localhost dnsmasq[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/addn_hosts - 0 addresses Dec 2 05:09:35 localhost dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/host Dec 2 05:09:35 localhost podman[317542]: 2025-12-02 10:09:35.520548409 +0000 UTC m=+0.048253043 container kill 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:35 localhost dnsmasq-dhcp[316876]: read 
/var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/opts Dec 2 05:09:35 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:35.546 2 INFO neutron.agent.securitygroups_rpc [None req-3d05d8f5-1d82-449d-b4e5-f5f672622e53 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:35 localhost nova_compute[281045]: 2025-12-02 10:09:35.676 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:35 localhost ovn_controller[153778]: 2025-12-02T10:09:35Z|00150|binding|INFO|Releasing lport 21d38e5b-83d6-443b-a9e2-4f6016ed9773 from this chassis (sb_readonly=0) Dec 2 05:09:35 localhost kernel: device tap21d38e5b-83 left promiscuous mode Dec 2 05:09:35 localhost ovn_controller[153778]: 2025-12-02T10:09:35Z|00151|binding|INFO|Setting lport 21d38e5b-83d6-443b-a9e2-4f6016ed9773 down in Southbound Dec 2 05:09:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:35.684 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-1d564796-ac76-41ff-8a1e-6fbd19d356c5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d564796-ac76-41ff-8a1e-6fbd19d356c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '204a1137a20e40c995bb9cd512e75a5c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45beaf16-b07c-44e8-bec5-71e8573d4df7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=21d38e5b-83d6-443b-a9e2-4f6016ed9773) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:35.686 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 21d38e5b-83d6-443b-a9e2-4f6016ed9773 in datapath 1d564796-ac76-41ff-8a1e-6fbd19d356c5 unbound from our chassis#033[00m Dec 2 05:09:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:35.688 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1d564796-ac76-41ff-8a1e-6fbd19d356c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:35.690 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4dde967a-eae1-4cf4-b142-e3df74087533]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:35 localhost nova_compute[281045]: 2025-12-02 10:09:35.699 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:35 localhost dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 2 addresses Dec 2 05:09:35 localhost podman[317583]: 2025-12-02 10:09:35.784397876 +0000 UTC m=+0.045274311 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:09:35 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host Dec 2 05:09:35 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts Dec 2 05:09:35 localhost nova_compute[281045]: 2025-12-02 10:09:35.799 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:36 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:36.621 2 INFO neutron.agent.securitygroups_rpc [None req-11576e49-abbc-421e-9ae1-ea6ee8281fd6 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:36 localhost dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 1 addresses Dec 2 05:09:36 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host Dec 2 05:09:36 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts Dec 2 05:09:36 localhost podman[317619]: 2025-12-02 10:09:36.832418468 +0000 UTC m=+0.061310454 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:09:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:09:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:09:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:09:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:09:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Dec 2 05:09:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 2 05:09:37 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:37.115 2 INFO neutron.agent.securitygroups_rpc [None req-841b4da2-cab1-42f7-ac13-ca29294f546a 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v352: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 8.2 KiB/s wr, 2 op/s Dec 2 05:09:37 localhost systemd[1]: tmp-crun.5RmVI1.mount: Deactivated successfully. 
Dec 2 05:09:37 localhost dnsmasq[316876]: exiting on receipt of SIGTERM Dec 2 05:09:37 localhost podman[317656]: 2025-12-02 10:09:37.317786343 +0000 UTC m=+0.072088807 container kill 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 2 05:09:37 localhost systemd[1]: libpod-5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83.scope: Deactivated successfully. Dec 2 05:09:37 localhost podman[317672]: 2025-12-02 10:09:37.386931937 +0000 UTC m=+0.049540473 container died 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:09:37 localhost podman[317672]: 2025-12-02 10:09:37.433944401 +0000 UTC m=+0.096553007 container remove 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:09:37 localhost systemd[1]: libpod-conmon-5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83.scope: Deactivated successfully. Dec 2 05:09:37 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch Dec 2 05:09:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:09:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 2 05:09:37 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:09:37 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:09:37 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:37.477 2 INFO neutron.agent.securitygroups_rpc [None req-841b4da2-cab1-42f7-ac13-ca29294f546a 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:37 localhost ceph-mon[301710]: 
mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:09:37 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:09:37 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:37.649 2 INFO neutron.agent.securitygroups_rpc [None req-fb787287-e6b7-452a-9552-33fb0c49fb57 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:37 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:37.829 2 INFO neutron.agent.securitygroups_rpc [None req-f57a8374-1238-48d5-81d1-d11d5ba885ce 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 
2 05:09:37 localhost systemd[1]: var-lib-containers-storage-overlay-de1d49012998774e6961c351ee30dc6327212e570cd42de9146733d20d3604cc-merged.mount: Deactivated successfully. Dec 2 05:09:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:37 localhost systemd[1]: run-netns-qdhcp\x2d1d564796\x2dac76\x2d41ff\x2d8a1e\x2d6fbd19d356c5.mount: Deactivated successfully. Dec 2 05:09:37 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:37.874 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:37 localhost nova_compute[281045]: 2025-12-02 10:09:37.876 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:38 localhost dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 0 addresses Dec 2 05:09:38 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host Dec 2 05:09:38 localhost podman[317713]: 2025-12-02 10:09:38.031901815 +0000 UTC m=+0.062031228 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:38 localhost dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts Dec 2 05:09:38 localhost ceph-mon[301710]: from='mgr.34354 
172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:09:38 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:38 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:38 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:09:38 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:38.834 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:39 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:39.002 2 INFO neutron.agent.securitygroups_rpc [None req-1ad7ca5c-e344-40e1-8595-888c801ea96b 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:39 localhost ceph-mgr[287188]: 
log_channel(cluster) log [DBG] : pgmap v353: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 18 KiB/s wr, 13 op/s Dec 2 05:09:39 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:39.263 2 INFO neutron.agent.securitygroups_rpc [None req-cc3286c0-8479-41a4-833f-f53341ebdf18 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:39 localhost dnsmasq[316793]: exiting on receipt of SIGTERM Dec 2 05:09:39 localhost podman[317751]: 2025-12-02 10:09:39.268837181 +0000 UTC m=+0.060452988 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:39 localhost systemd[1]: libpod-093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed.scope: Deactivated successfully. 
Dec 2 05:09:39 localhost podman[317766]: 2025-12-02 10:09:39.333339263 +0000 UTC m=+0.051537654 container died 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:09:39 localhost nova_compute[281045]: 2025-12-02 10:09:39.381 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:39 localhost systemd[1]: var-lib-containers-storage-overlay-39a3d1465d9dc770bb3a503daf7256c2208118b497345b7f792f0cc2082142ec-merged.mount: Deactivated successfully. 
Dec 2 05:09:39 localhost podman[317766]: 2025-12-02 10:09:39.411369051 +0000 UTC m=+0.129567412 container cleanup 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:39 localhost systemd[1]: libpod-conmon-093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed.scope: Deactivated successfully. Dec 2 05:09:39 localhost podman[317767]: 2025-12-02 10:09:39.430444757 +0000 UTC m=+0.139147117 container remove 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:09:39 localhost ovn_controller[153778]: 2025-12-02T10:09:39Z|00152|binding|INFO|Releasing lport 8d7aba05-5eab-44a1-aacc-c2b62f525db1 from this chassis (sb_readonly=0) Dec 2 05:09:39 localhost kernel: device tap8d7aba05-5e left promiscuous mode Dec 2 05:09:39 localhost ovn_controller[153778]: 2025-12-02T10:09:39Z|00153|binding|INFO|Setting lport 8d7aba05-5eab-44a1-aacc-c2b62f525db1 down in Southbound Dec 2 05:09:39 localhost nova_compute[281045]: 2025-12-02 10:09:39.441 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:39 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:39.451 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-669f1b01-3857-4ea6-8083-25e0b2ce70bc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-669f1b01-3857-4ea6-8083-25e0b2ce70bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '873db74a4a7a4aad823d1b7e8b2d6c26', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d91ccfa3-a134-4f2e-be7a-020d064cc147, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8d7aba05-5eab-44a1-aacc-c2b62f525db1) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:39 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:39.452 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 8d7aba05-5eab-44a1-aacc-c2b62f525db1 in datapath 669f1b01-3857-4ea6-8083-25e0b2ce70bc unbound from our chassis#033[00m Dec 2 05:09:39 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:39.454 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for 
network 669f1b01-3857-4ea6-8083-25e0b2ce70bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:09:39 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:39.454 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[109e6ea6-e4f9-4f06-ba54-64be7ec6eb14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:39 localhost nova_compute[281045]: 2025-12-02 10:09:39.460 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:39 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:39.710 262347 INFO neutron.agent.dhcp.agent [None req-1900b73d-78ff-4d7b-9cc8-db694fa934ea - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:40 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:40.221 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:40 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:40.226 2 INFO neutron.agent.securitygroups_rpc [None req-d726f52f-c5d0-4b2e-935e-07d00a13737f 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:40 localhost systemd[1]: run-netns-qdhcp\x2d669f1b01\x2d3857\x2d4ea6\x2d8083\x2d25e0b2ce70bc.mount: Deactivated successfully. 
Dec 2 05:09:40 localhost nova_compute[281045]: 2025-12-02 10:09:40.801 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:40 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch Dec 2 05:09:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:40 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 2 05:09:40 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:09:40 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Dec 2 05:09:40 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:09:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:40 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": 
"alice", "format": "json"}]: dispatch Dec 2 05:09:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:40 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:09:40 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:09:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:41 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:41.067 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v354: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 14 KiB/s wr, 12 op/s Dec 2 05:09:41 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:09:41 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:09:41 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:09:41 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 2 05:09:41 localhost 
nova_compute[281045]: 2025-12-02 10:09:41.759 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:42 localhost openstack_network_exporter[241816]: ERROR 10:09:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:09:42 localhost openstack_network_exporter[241816]: ERROR 10:09:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:09:42 localhost openstack_network_exporter[241816]: ERROR 10:09:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:09:42 localhost openstack_network_exporter[241816]: ERROR 10:09:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:09:42 localhost openstack_network_exporter[241816]: Dec 2 05:09:42 localhost openstack_network_exporter[241816]: ERROR 10:09:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:09:42 localhost openstack_network_exporter[241816]: Dec 2 05:09:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:43 localhost nova_compute[281045]: 2025-12-02 10:09:43.003 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v355: 177 pgs: 177 active+clean; 146 MiB data, 870 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 63 op/s Dec 2 05:09:44 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": 
"alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:09:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:09:44 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 2 05:09:44 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:09:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:09:44 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:09:44 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:09:44 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:09:44 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:44 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:44 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:09:44 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:44.616 2 INFO neutron.agent.securitygroups_rpc [None req-fbbd4af2-250f-4ff2-b3ab-e75b109a47fa 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security 
group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:44 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:44.682 2 INFO neutron.agent.securitygroups_rpc [None req-26916261-820c-405a-8570-4b6047e10a3c 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v356: 177 pgs: 177 active+clean; 146 MiB data, 870 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 62 op/s Dec 2 05:09:45 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:45.593 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:45Z, description=, device_id=09bd6e36-fd7e-4a01-b1f1-fbe7d4a09c35, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ea175376-684d-4ede-b30d-777d1d743d12, ip_allocation=immediate, mac_address=fa:16:3e:f4:c5:54, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, 
network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2291, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:09:45Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:09:45 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:45.752 2 INFO neutron.agent.securitygroups_rpc [None req-5632dc43-e5b5-45de-a516-10b988e48fe8 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:45 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:45.798 2 INFO neutron.agent.securitygroups_rpc [None req-50568852-e227-40d9-a94b-d9d972f0134a 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:45 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:09:45 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:09:45 localhost podman[317812]: 2025-12-02 10:09:45.807146902 +0000 UTC m=+0.062236743 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 2 05:09:45 localhost dnsmasq-dhcp[262677]: read 
/var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:09:45 localhost nova_compute[281045]: 2025-12-02 10:09:45.803 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:09:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:09:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:09:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:09:45 localhost podman[317829]: 2025-12-02 10:09:45.938355974 +0000 UTC m=+0.091283056 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:09:45 localhost systemd[1]: tmp-crun.9suV9c.mount: Deactivated successfully. Dec 2 05:09:45 localhost podman[317827]: 2025-12-02 10:09:45.958858034 +0000 UTC m=+0.113933211 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:09:45 localhost podman[317827]: 2025-12-02 10:09:45.962081633 +0000 UTC m=+0.117156780 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 2 05:09:45 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 05:09:46 localhost podman[317831]: 2025-12-02 10:09:46.002201756 +0000 UTC m=+0.143842791 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base 
Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 2 05:09:46 localhost podman[317828]: 2025-12-02 10:09:46.043761423 +0000 UTC m=+0.197766818 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:09:46 localhost podman[317828]: 2025-12-02 10:09:46.055853325 +0000 UTC m=+0.209858760 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 
'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:09:46 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:09:46 localhost podman[317829]: 2025-12-02 10:09:46.072543827 +0000 UTC m=+0.225470899 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 05:09:46 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:09:46 localhost podman[317831]: 2025-12-02 10:09:46.097594687 +0000 UTC m=+0.239235772 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:09:46 localhost systemd[1]: 
c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 05:09:46 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:46.157 262347 INFO neutron.agent.dhcp.agent [None req-a7307f93-5a1a-419b-b17d-8ec10364d511 - - - - - -] DHCP configuration for ports {'ea175376-684d-4ede-b30d-777d1d743d12'} is completed#033[00m Dec 2 05:09:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v357: 177 pgs: 177 active+clean; 146 MiB data, 870 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 62 op/s Dec 2 05:09:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:47 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 2 05:09:47 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 2 05:09:47 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:09:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Dec 2 05:09:47 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:09:47 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:47 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 2 05:09:47 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:47 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:09:47 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:09:47 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:48 localhost nova_compute[281045]: 2025-12-02 10:09:48.007 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:48 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:09:48 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:09:48 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:09:48 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 2 05:09:49 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 65 op/s Dec 2 05:09:50 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses Dec 2 05:09:50 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:09:50 localhost podman[317931]: 2025-12-02 10:09:50.378521686 +0000 UTC m=+0.059133518 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:09:50 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:09:50 localhost systemd[1]: tmp-crun.ht4zPZ.mount: Deactivated successfully. 
Dec 2 05:09:50 localhost nova_compute[281045]: 2025-12-02 10:09:50.809 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:51 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch Dec 2 05:09:51 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:09:51 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 2 05:09:51 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:09:51 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:09:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v359: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 54 op/s Dec 2 05:09:51 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data 
namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:09:51 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:51 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:09:51 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:51.476 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8:0:1:f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, 
additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:51 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:51.478 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m Dec 2 05:09:51 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:51.480 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:51 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:51.480 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b899ee-0c72-4cff-a99b-2f7f8679961b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:51 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", 
"format": "json"} : dispatch Dec 2 05:09:51 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:51 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:51 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:09:52 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:52.051 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:51Z, description=, device_id=7059c3fd-a028-4cdb-9894-b6db3dc33369, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e0985da3-b049-4018-a9b6-76ca5bf11bab, ip_allocation=immediate, mac_address=fa:16:3e:14:f8:01, name=, network=admin_state_up=True, 
availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2338, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:09:51Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:09:52 localhost systemd[1]: tmp-crun.ESNPIX.mount: Deactivated successfully. 
Dec 2 05:09:52 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:09:52 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:09:52 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:09:52 localhost podman[317970]: 2025-12-02 10:09:52.263465844 +0000 UTC m=+0.067128934 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:52 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:52.475 262347 INFO neutron.agent.dhcp.agent [None req-c5ced586-d563-45f7-9e01-5939ec796d3f - - - - - -] DHCP configuration for ports {'e0985da3-b049-4018-a9b6-76ca5bf11bab'} is completed#033[00m Dec 2 05:09:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:52 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cc6380cd-d1fe-41c0-9f77-54a6bc7687ef", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:09:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, vol_name:cephfs) < "" Dec 2 05:09:52 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cc6380cd-d1fe-41c0-9f77-54a6bc7687ef/.meta.tmp' Dec 2 05:09:52 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cc6380cd-d1fe-41c0-9f77-54a6bc7687ef/.meta.tmp' to config b'/volumes/_nogroup/cc6380cd-d1fe-41c0-9f77-54a6bc7687ef/.meta' Dec 2 05:09:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, vol_name:cephfs) < "" Dec 2 05:09:52 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cc6380cd-d1fe-41c0-9f77-54a6bc7687ef", "format": "json"}]: dispatch Dec 2 05:09:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, vol_name:cephfs) < "" Dec 2 05:09:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, vol_name:cephfs) < "" Dec 2 05:09:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:09:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 05:09:53 localhost nova_compute[281045]: 2025-12-02 10:09:53.059 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:53 localhost podman[317992]: 2025-12-02 10:09:53.108132986 +0000 UTC m=+0.114344593 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:09:53 localhost podman[317992]: 2025-12-02 10:09:53.120973061 +0000 UTC m=+0.127184628 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:09:53 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v360: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 57 op/s Dec 2 05:09:53 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:09:53 localhost podman[317993]: 2025-12-02 10:09:53.169876303 +0000 UTC m=+0.175087910 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a 
package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Dec 2 05:09:53 localhost podman[317993]: 2025-12-02 10:09:53.207041266 +0000 UTC m=+0.212252863 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, version=9.6) Dec 2 05:09:53 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:09:53 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:53.244 2 INFO neutron.agent.securitygroups_rpc [None req-2a02d4a7-eedb-47f7-975e-8a697d665d71 6a4701e292e04a82a827d127f0ef5b65 0b7e671d1f944c979f6feba0246d3141 - - default default] Security group member updated ['274309be-bd70-4043-9459-2a1d0784f871']#033[00m Dec 2 05:09:53 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:53.303 2 INFO neutron.agent.securitygroups_rpc [None req-384a8cd4-c502-4296-9a0a-cda4da9440fe 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:53 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:53.866 262347 INFO neutron.agent.linux.ip_lib [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Device tapd313ec5a-74 cannot be used as it has no MAC address#033[00m Dec 2 05:09:53 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:53.868 2 INFO neutron.agent.securitygroups_rpc [None req-552eb951-c19a-4f29-a133-451809159dee 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:53 localhost nova_compute[281045]: 2025-12-02 10:09:53.889 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:53 localhost kernel: device tapd313ec5a-74 entered promiscuous mode Dec 2 05:09:53 localhost nova_compute[281045]: 2025-12-02 10:09:53.897 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:53 localhost ovn_controller[153778]: 2025-12-02T10:09:53Z|00154|binding|INFO|Claiming lport d313ec5a-74ee-4c97-a266-afe79cf4d76a for this chassis. 
Dec 2 05:09:53 localhost ovn_controller[153778]: 2025-12-02T10:09:53Z|00155|binding|INFO|d313ec5a-74ee-4c97-a266-afe79cf4d76a: Claiming unknown Dec 2 05:09:53 localhost NetworkManager[5967]: [1764670193.9030] manager: (tapd313ec5a-74): new Generic device (/org/freedesktop/NetworkManager/Devices/33) Dec 2 05:09:53 localhost systemd-udevd[318043]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:09:53 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:53.912 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-8de9fa50-7037-4f69-a2b1-5be6f609300b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8de9fa50-7037-4f69-a2b1-5be6f609300b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b7e671d1f944c979f6feba0246d3141', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e595e1b-0a18-488d-bb72-ec4f6317b810, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d313ec5a-74ee-4c97-a266-afe79cf4d76a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:53 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:53.915 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 
d313ec5a-74ee-4c97-a266-afe79cf4d76a in datapath 8de9fa50-7037-4f69-a2b1-5be6f609300b bound to our chassis#033[00m Dec 2 05:09:53 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:53.917 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8de9fa50-7037-4f69-a2b1-5be6f609300b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:09:53 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:53.918 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[044d274a-a839-428e-9f7a-521609f9d94d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:53 localhost journal[229262]: ethtool ioctl error on tapd313ec5a-74: No such device Dec 2 05:09:53 localhost journal[229262]: ethtool ioctl error on tapd313ec5a-74: No such device Dec 2 05:09:53 localhost nova_compute[281045]: 2025-12-02 10:09:53.935 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:53 localhost nova_compute[281045]: 2025-12-02 10:09:53.940 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:53 localhost journal[229262]: ethtool ioctl error on tapd313ec5a-74: No such device Dec 2 05:09:53 localhost ovn_controller[153778]: 2025-12-02T10:09:53Z|00156|binding|INFO|Setting lport d313ec5a-74ee-4c97-a266-afe79cf4d76a ovn-installed in OVS Dec 2 05:09:53 localhost ovn_controller[153778]: 2025-12-02T10:09:53Z|00157|binding|INFO|Setting lport d313ec5a-74ee-4c97-a266-afe79cf4d76a up in Southbound Dec 2 05:09:53 localhost nova_compute[281045]: 2025-12-02 10:09:53.942 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 
2 05:09:53 localhost journal[229262]: ethtool ioctl error on tapd313ec5a-74: No such device Dec 2 05:09:53 localhost journal[229262]: ethtool ioctl error on tapd313ec5a-74: No such device Dec 2 05:09:53 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:53.953 2 INFO neutron.agent.securitygroups_rpc [None req-793d7f6f-bcaa-4aba-a1ec-f239eb834fe6 6a4701e292e04a82a827d127f0ef5b65 0b7e671d1f944c979f6feba0246d3141 - - default default] Security group member updated ['274309be-bd70-4043-9459-2a1d0784f871']#033[00m Dec 2 05:09:53 localhost journal[229262]: ethtool ioctl error on tapd313ec5a-74: No such device Dec 2 05:09:53 localhost journal[229262]: ethtool ioctl error on tapd313ec5a-74: No such device Dec 2 05:09:53 localhost nova_compute[281045]: 2025-12-02 10:09:53.966 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:53 localhost journal[229262]: ethtool ioctl error on tapd313ec5a-74: No such device Dec 2 05:09:54 localhost nova_compute[281045]: 2025-12-02 10:09:53.998 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:54 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 2 05:09:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:54 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 2 05:09:54 localhost ceph-mon[301710]: 
log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:09:54 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Dec 2 05:09:54 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:09:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:54 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 2 05:09:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:54 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:09:54 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:54.534 2 INFO neutron.agent.securitygroups_rpc [None req-ecc73d10-d9a3-477f-859a-88e3d0a4a336 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:54 localhost ceph-mgr[287188]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:09:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:09:54 localhost nova_compute[281045]: 2025-12-02 10:09:54.601 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:54 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:09:54 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:09:54 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:09:54 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 2 05:09:54 localhost podman[318115]: Dec 2 05:09:54 localhost podman[318115]: 2025-12-02 10:09:54.792585994 +0000 UTC m=+0.101802840 container create 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:54 localhost podman[318115]: 2025-12-02 10:09:54.737322825 
+0000 UTC m=+0.046539731 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:54 localhost systemd[1]: Started libpod-conmon-860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f.scope. Dec 2 05:09:54 localhost systemd[1]: Started libcrun container. Dec 2 05:09:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a9135aa181504ec852eaaa2d9517fcf20a06ac4cf2b278c7d3159af034da279/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:54 localhost podman[318115]: 2025-12-02 10:09:54.86993741 +0000 UTC m=+0.179154236 container init 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:54 localhost podman[318115]: 2025-12-02 10:09:54.878163213 +0000 UTC m=+0.187380069 container start 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:54 localhost dnsmasq[318133]: started, version 2.85 cachesize 150 Dec 2 05:09:54 localhost dnsmasq[318133]: DNS service 
limited to local subnets
Dec 2 05:09:54 localhost dnsmasq[318133]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:09:54 localhost dnsmasq[318133]: warning: no upstream servers configured
Dec 2 05:09:54 localhost dnsmasq-dhcp[318133]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 2 05:09:54 localhost dnsmasq[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/addn_hosts - 0 addresses
Dec 2 05:09:54 localhost dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/host
Dec 2 05:09:54 localhost dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/opts
Dec 2 05:09:54 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:54.944 262347 INFO neutron.agent.dhcp.agent [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:53Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=35a195b4-4893-468b-bebd-aa243804f4fb, ip_allocation=immediate, mac_address=fa:16:3e:f0:2a:16, name=tempest-ExtraDHCPOptionsIpV6TestJSON-1912580586, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:51Z, description=, dns_domain=, id=8de9fa50-7037-4f69-a2b1-5be6f609300b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsIpV6TestJSON-test-network-1209960196, port_security_enabled=True, project_id=0b7e671d1f944c979f6feba0246d3141, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7693, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2340, status=ACTIVE, subnets=['9fa31bd1-0c42-435a-80e7-1590e36d6d8c'], tags=[], tenant_id=0b7e671d1f944c979f6feba0246d3141, updated_at=2025-12-02T10:09:52Z, vlan_transparent=None, network_id=8de9fa50-7037-4f69-a2b1-5be6f609300b, port_security_enabled=True, project_id=0b7e671d1f944c979f6feba0246d3141, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['274309be-bd70-4043-9459-2a1d0784f871'], standard_attr_id=2349, status=DOWN, tags=[], tenant_id=0b7e671d1f944c979f6feba0246d3141, updated_at=2025-12-02T10:09:53Z on network 8de9fa50-7037-4f69-a2b1-5be6f609300b
Dec 2 05:09:55 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:55.008 2 INFO neutron.agent.securitygroups_rpc [None req-c4292fab-d4f2-45ec-8373-3372677610e3 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 2 05:09:55 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.041 262347 INFO neutron.agent.dhcp.agent [None req-4b6596ae-0cd4-4590-b49d-985494e1b502 - - - - - -] DHCP configuration for ports {'c0ec0206-98a8-40dc-a55b-75dc959f678e'} is completed
Dec 2 05:09:55 localhost dnsmasq[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/addn_hosts - 1 addresses
Dec 2 05:09:55 localhost podman[318151]: 2025-12-02 10:09:55.115871037 +0000 UTC m=+0.043882580 container kill 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:09:55 localhost dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/host
Dec 2 05:09:55 localhost dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/opts
Dec 2 05:09:55 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v361: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 16 KiB/s wr, 5 op/s
Dec 2 05:09:55 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:55.252 2 INFO neutron.agent.securitygroups_rpc [None req-4940f51c-3349-4656-978b-9a0b4cd29cb9 2903ef7b8c704dc09be34f96aeda2cff 6d11f96a2f644a22a82a6af9a2a1e5d2 - - default default] Security group member updated ['2e0224f5-51f6-419e-8240-7e06ddf53ec7']
Dec 2 05:09:55 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.263 262347 INFO neutron.agent.dhcp.agent [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:53Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[, , ], fixed_ips=[], id=e3969589-82f8-477d-9278-7806f45e965e, ip_allocation=immediate, mac_address=fa:16:3e:73:da:f5, name=tempest-ExtraDHCPOptionsIpV6TestJSON-1381010091, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:51Z, description=, dns_domain=, id=8de9fa50-7037-4f69-a2b1-5be6f609300b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsIpV6TestJSON-test-network-1209960196, port_security_enabled=True, project_id=0b7e671d1f944c979f6feba0246d3141, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7693, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2340, status=ACTIVE, subnets=['9fa31bd1-0c42-435a-80e7-1590e36d6d8c'], tags=[], tenant_id=0b7e671d1f944c979f6feba0246d3141, updated_at=2025-12-02T10:09:52Z, vlan_transparent=None, network_id=8de9fa50-7037-4f69-a2b1-5be6f609300b, port_security_enabled=True, project_id=0b7e671d1f944c979f6feba0246d3141, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['274309be-bd70-4043-9459-2a1d0784f871'], standard_attr_id=2351, status=DOWN, tags=[], tenant_id=0b7e671d1f944c979f6feba0246d3141, updated_at=2025-12-02T10:09:53Z on network 8de9fa50-7037-4f69-a2b1-5be6f609300b
Dec 2 05:09:55 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.280 262347 INFO neutron.agent.linux.dhcp [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Cannot apply dhcp option tftp-server because it's ip_version 4 is not in port's address IP versions
Dec 2 05:09:55 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.281 262347 INFO neutron.agent.linux.dhcp [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Cannot apply dhcp option server-ip-address because it's ip_version 4 is not in port's address IP versions
Dec 2 05:09:55 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.282 262347 INFO neutron.agent.linux.dhcp [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Cannot apply dhcp option bootfile-name because it's ip_version 4 is not in port's address IP versions
Dec 2 05:09:55 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:55.290 2 INFO neutron.agent.securitygroups_rpc [None req-d1fab671-1814-41db-9614-65c239fa9e70 6a4701e292e04a82a827d127f0ef5b65 0b7e671d1f944c979f6feba0246d3141 - - default default] Security group member updated ['274309be-bd70-4043-9459-2a1d0784f871']
Dec 2 05:09:55 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.383 262347 INFO neutron.agent.dhcp.agent
[None req-4856648b-e92e-4470-b926-c5749cdbbc49 - - - - - -] DHCP configuration for ports {'35a195b4-4893-468b-bebd-aa243804f4fb'} is completed
Dec 2 05:09:55 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:55.431 2 INFO neutron.agent.securitygroups_rpc [None req-4940f51c-3349-4656-978b-9a0b4cd29cb9 2903ef7b8c704dc09be34f96aeda2cff 6d11f96a2f644a22a82a6af9a2a1e5d2 - - default default] Security group member updated ['2e0224f5-51f6-419e-8240-7e06ddf53ec7']
Dec 2 05:09:55 localhost dnsmasq[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/addn_hosts - 2 addresses
Dec 2 05:09:55 localhost dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/host
Dec 2 05:09:55 localhost dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/opts
Dec 2 05:09:55 localhost podman[318188]: 2025-12-02 10:09:55.45608618 +0000 UTC m=+0.054286428 container kill 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 2 05:09:55 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.705 262347 INFO neutron.agent.dhcp.agent [None req-2d882893-811c-4f4d-8273-8c23f848d87b - - - - - -] DHCP configuration for ports {'e3969589-82f8-477d-9278-7806f45e965e'} is completed
Dec 2 05:09:55 localhost dnsmasq[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/addn_hosts - 1 addresses
Dec 2 05:09:55 localhost dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/host
Dec 2 05:09:55 localhost podman[318225]: 2025-12-02 10:09:55.791077844 +0000 UTC m=+0.061169421 container kill 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 2 05:09:55 localhost dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/opts
Dec 2 05:09:55 localhost systemd[1]: tmp-crun.QQ7El9.mount: Deactivated successfully.
Dec 2 05:09:55 localhost nova_compute[281045]: 2025-12-02 10:09:55.811 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:09:55 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.940 262347 INFO neutron.agent.dhcp.agent [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:53Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[, , ], fixed_ips=[], id=35a195b4-4893-468b-bebd-aa243804f4fb, ip_allocation=immediate, mac_address=fa:16:3e:f0:2a:16, name=tempest-new-port-name-454689627, network_id=8de9fa50-7037-4f69-a2b1-5be6f609300b, port_security_enabled=True, project_id=0b7e671d1f944c979f6feba0246d3141, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['274309be-bd70-4043-9459-2a1d0784f871'], standard_attr_id=2349, status=DOWN, tags=[], tenant_id=0b7e671d1f944c979f6feba0246d3141, updated_at=2025-12-02T10:09:55Z on network 8de9fa50-7037-4f69-a2b1-5be6f609300b
Dec 2 05:09:55 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.957 262347 INFO neutron.agent.linux.dhcp [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Cannot apply dhcp option server-ip-address because it's ip_version 4 is not in port's address IP versions
Dec 2 05:09:55 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.958 262347 INFO neutron.agent.linux.dhcp [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Cannot apply dhcp option bootfile-name because it's ip_version 4 is not in port's address IP versions
Dec 2 05:09:55 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.959 262347 INFO neutron.agent.linux.dhcp [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Cannot apply dhcp option tftp-server because it's ip_version 4 is not in port's address IP versions
Dec 2 05:09:56 localhost dnsmasq[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/addn_hosts - 1 addresses
Dec 2 05:09:56 localhost podman[318261]: 2025-12-02 10:09:56.132583057 +0000 UTC m=+0.059362205 container kill 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 2 05:09:56 localhost dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/host
Dec 2 05:09:56 localhost dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/opts
Dec 2 05:09:56 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:56.308 2 INFO neutron.agent.securitygroups_rpc [None req-2709b5dd-db11-4508-a989-29103dd3702e 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 2 05:09:56 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:56.340 262347 INFO neutron.agent.dhcp.agent [None req-d45bc57f-25ab-431f-8684-fcced6ca090d - - - - - -] DHCP configuration for ports {'35a195b4-4893-468b-bebd-aa243804f4fb'} is completed
Dec 2 05:09:56 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cc6380cd-d1fe-41c0-9f77-54a6bc7687ef", "format": "json"}]: dispatch
Dec 2 05:09:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 2 05:09:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 2 05:09:56 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cc6380cd-d1fe-41c0-9f77-54a6bc7687ef' of type subvolume
Dec 2 05:09:56 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:09:56.513+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cc6380cd-d1fe-41c0-9f77-54a6bc7687ef' of type subvolume
Dec 2 05:09:56 localhost
ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cc6380cd-d1fe-41c0-9f77-54a6bc7687ef", "force": true, "format": "json"}]: dispatch
Dec 2 05:09:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, vol_name:cephfs) < ""
Dec 2 05:09:56 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cc6380cd-d1fe-41c0-9f77-54a6bc7687ef'' moved to trashcan
Dec 2 05:09:56 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 2 05:09:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, vol_name:cephfs) < ""
Dec 2 05:09:56 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:56.648 2 INFO neutron.agent.securitygroups_rpc [None req-4752b673-ecc5-46d9-8169-f464ead4adc9 6a4701e292e04a82a827d127f0ef5b65 0b7e671d1f944c979f6feba0246d3141 - - default default] Security group member updated ['274309be-bd70-4043-9459-2a1d0784f871']
Dec 2 05:09:56 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:56.667 2 INFO neutron.agent.securitygroups_rpc [None req-237e1b13-6077-411f-87b4-3c14ff8061ce 2903ef7b8c704dc09be34f96aeda2cff 6d11f96a2f644a22a82a6af9a2a1e5d2 - - default default] Security group member updated ['2e0224f5-51f6-419e-8240-7e06ddf53ec7']
Dec 2 05:09:56 localhost podman[318298]: 2025-12-02 10:09:56.848459233 +0000 UTC m=+0.060878491 container kill 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 2 05:09:56 localhost dnsmasq[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/addn_hosts - 0 addresses
Dec 2 05:09:56 localhost dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/host
Dec 2 05:09:56 localhost dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/opts
Dec 2 05:09:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v362: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 16 KiB/s wr, 5 op/s
Dec 2 05:09:57 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:57.442 2 INFO neutron.agent.securitygroups_rpc [None req-84ee1250-9ce8-4943-9e17-d2eb70522c28 2903ef7b8c704dc09be34f96aeda2cff 6d11f96a2f644a22a82a6af9a2a1e5d2 - - default default] Security group member updated ['2e0224f5-51f6-419e-8240-7e06ddf53ec7']
Dec 2 05:09:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:09:57 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 2 05:09:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 2 05:09:57 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:57.651 2 INFO neutron.agent.securitygroups_rpc [None req-f449fe39-4274-4abb-aff3-e3ba219c9fe2 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 2 05:09:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 2 05:09:57 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 2 05:09:57 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 2 05:09:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 2 05:09:57 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:09:57 localhost systemd[1]: tmp-crun.QqoLvK.mount: Deactivated successfully.
Dec 2 05:09:57 localhost dnsmasq[318133]: exiting on receipt of SIGTERM
Dec 2 05:09:57 localhost podman[318335]: 2025-12-02 10:09:57.759747244 +0000 UTC m=+0.059467628 container kill 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 2 05:09:57 localhost systemd[1]: libpod-860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f.scope: Deactivated successfully.
Dec 2 05:09:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 2 05:09:57 localhost podman[318357]: 2025-12-02 10:09:57.833575172 +0000 UTC m=+0.051143762 container died 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 2 05:09:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f-userdata-shm.mount: Deactivated successfully.
Dec 2 05:09:57 localhost systemd[1]: var-lib-containers-storage-overlay-2a9135aa181504ec852eaaa2d9517fcf20a06ac4cf2b278c7d3159af034da279-merged.mount: Deactivated successfully.
Dec 2 05:09:57 localhost podman[318357]: 2025-12-02 10:09:57.934876645 +0000 UTC m=+0.152445245 container remove 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 2 05:09:57 localhost systemd[1]: libpod-conmon-860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f.scope: Deactivated successfully.
Dec 2 05:09:57 localhost nova_compute[281045]: 2025-12-02 10:09:57.948 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:09:57 localhost kernel: device tapd313ec5a-74 left promiscuous mode
Dec 2 05:09:57 localhost ovn_controller[153778]: 2025-12-02T10:09:57Z|00158|binding|INFO|Releasing lport d313ec5a-74ee-4c97-a266-afe79cf4d76a from this chassis (sb_readonly=0)
Dec 2 05:09:57 localhost ovn_controller[153778]: 2025-12-02T10:09:57Z|00159|binding|INFO|Setting lport d313ec5a-74ee-4c97-a266-afe79cf4d76a down in Southbound
Dec 2 05:09:57 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:57.964 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-8de9fa50-7037-4f69-a2b1-5be6f609300b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8de9fa50-7037-4f69-a2b1-5be6f609300b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b7e671d1f944c979f6feba0246d3141', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e595e1b-0a18-488d-bb72-ec4f6317b810, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d313ec5a-74ee-4c97-a266-afe79cf4d76a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 2 05:09:57 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:57.966 159483 INFO neutron.agent.ovn.metadata.agent [-] Port d313ec5a-74ee-4c97-a266-afe79cf4d76a in datapath 8de9fa50-7037-4f69-a2b1-5be6f609300b unbound from our chassis
Dec 2 05:09:57 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:57.967 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8de9fa50-7037-4f69-a2b1-5be6f609300b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 2 05:09:57 localhost ovn_metadata_agent[159477]: 2025-12-02 10:09:57.968 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[bc27e5ce-4139-4c6c-950e-33e0fbf87067]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 2 05:09:57 localhost nova_compute[281045]: 2025-12-02 10:09:57.970 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:09:57 localhost systemd[1]: run-netns-qdhcp\x2d8de9fa50\x2d7037\x2d4f69\x2da2b1\x2d5be6f609300b.mount: Deactivated successfully.
Dec 2 05:09:57 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:57.994 262347 INFO neutron.agent.dhcp.agent [None req-7d4c5c76-9bf3-46e0-8245-b5d6024546a9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 2 05:09:58 localhost nova_compute[281045]: 2025-12-02 10:09:58.063 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:09:58 localhost neutron_sriov_agent[255428]: 2025-12-02 10:09:58.480 2 INFO neutron.agent.securitygroups_rpc [None req-c9840334-22ed-4fcf-9fb8-d440584d45ac 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 2 05:09:58 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:09:58.573 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 2 05:09:58 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 2 05:09:58 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:09:58 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:09:58 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 2 05:09:59 localhost nova_compute[281045]: 2025-12-02 10:09:59.039 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:09:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v363: 177 pgs: 177 active+clean; 147 MiB data, 816 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 30 KiB/s wr, 9 op/s
Dec 2 05:09:59 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "05ca7661-f391-4234-9c50-a2000ddc14bd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 2 05:09:59 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:05ca7661-f391-4234-9c50-a2000ddc14bd, vol_name:cephfs) < ""
Dec 2 05:09:59 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/05ca7661-f391-4234-9c50-a2000ddc14bd/.meta.tmp'
Dec 2 05:09:59 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/05ca7661-f391-4234-9c50-a2000ddc14bd/.meta.tmp' to config
b'/volumes/_nogroup/05ca7661-f391-4234-9c50-a2000ddc14bd/.meta'
Dec 2 05:09:59 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:05ca7661-f391-4234-9c50-a2000ddc14bd, vol_name:cephfs) < ""
Dec 2 05:09:59 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "05ca7661-f391-4234-9c50-a2000ddc14bd", "format": "json"}]: dispatch
Dec 2 05:09:59 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:05ca7661-f391-4234-9c50-a2000ddc14bd, vol_name:cephfs) < ""
Dec 2 05:09:59 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:05ca7661-f391-4234-9c50-a2000ddc14bd, vol_name:cephfs) < ""
Dec 2 05:09:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 05:10:00 localhost podman[318379]: 2025-12-02 10:10:00.074704704 +0000 UTC m=+0.081316879 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:10:00 localhost podman[318379]: 2025-12-02 10:10:00.086970042 +0000 UTC m=+0.093582217 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Dec 2 05:10:00 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 05:10:00 localhost nova_compute[281045]: 2025-12-02 10:10:00.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 05:10:00 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:00.707 2 INFO neutron.agent.securitygroups_rpc [None req-3e03e95a-f561-4345-b792-65b4ec75916c 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 2 05:10:00 localhost ceph-mon[301710]: overall HEALTH_OK
Dec 2 05:10:00 localhost nova_compute[281045]: 2025-12-02 10:10:00.813 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:10:01 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 2 05:10:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:10:01 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 2 05:10:01 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 2 05:10:01 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 2 05:10:01 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 2 05:10:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:10:01 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 2 05:10:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:10:01 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 2 05:10:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v364: 177 pgs: 177 active+clean; 147 MiB data, 816 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 22 KiB/s wr, 6 op/s
Dec 2 05:10:01 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 2 05:10:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:10:01 localhost sshd[318399]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 05:10:01 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988'
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:01 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:01 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:01 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 2 05:10:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:02 localhost nova_compute[281045]: 2025-12-02 10:10:02.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:03 localhost nova_compute[281045]: 2025-12-02 10:10:03.017 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:03 localhost nova_compute[281045]: 2025-12-02 10:10:03.065 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:03 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v365: 177 pgs: 177 active+clean; 147 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 33 KiB/s wr, 9 op/s Dec 2 05:10:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:03.180 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" 
inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:10:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:03.180 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:10:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:03.180 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:10:03 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "05ca7661-f391-4234-9c50-a2000ddc14bd", "format": "json"}]: dispatch Dec 2 05:10:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:05ca7661-f391-4234-9c50-a2000ddc14bd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:10:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:05ca7661-f391-4234-9c50-a2000ddc14bd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:10:03 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:10:03.213+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '05ca7661-f391-4234-9c50-a2000ddc14bd' of type subvolume Dec 2 05:10:03 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '05ca7661-f391-4234-9c50-a2000ddc14bd' of type subvolume Dec 2 05:10:03 localhost ceph-mgr[287188]: 
log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "05ca7661-f391-4234-9c50-a2000ddc14bd", "force": true, "format": "json"}]: dispatch Dec 2 05:10:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:05ca7661-f391-4234-9c50-a2000ddc14bd, vol_name:cephfs) < "" Dec 2 05:10:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/05ca7661-f391-4234-9c50-a2000ddc14bd'' moved to trashcan Dec 2 05:10:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:10:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:05ca7661-f391-4234-9c50-a2000ddc14bd, vol_name:cephfs) < "" Dec 2 05:10:03 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:03.549 2 INFO neutron.agent.securitygroups_rpc [None req-7ff6a690-1608-4241-962d-cf0eb5f2eb30 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:10:03 localhost podman[239757]: time="2025-12-02T10:10:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:10:03 localhost podman[239757]: @ - - [02/Dec/2025:10:10:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:10:03 localhost podman[239757]: @ - - [02/Dec/2025:10:10:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19209 "" "Go-http-client/1.1" Dec 2 05:10:04 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch Dec 2 05:10:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:10:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 2 05:10:04 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:04 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:10:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:10:04 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data 
namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:04 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:04.408 2 INFO neutron.agent.securitygroups_rpc [None req-90a76f3a-a979-402e-97fd-700e856a8199 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:10:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:10:04 localhost nova_compute[281045]: 2025-12-02 10:10:04.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:04 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:04.927 2 INFO neutron.agent.securitygroups_rpc [None req-c77326cb-1074-4872-8ee0-28f281df7dfe 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:05 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v366: 177 pgs: 177 active+clean; 147 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 24 KiB/s wr, 7 op/s Dec 2 05:10:05 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:05 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:05 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:05 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:10:05 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:05.447 2 INFO neutron.agent.securitygroups_rpc [None req-9cd60b66-d893-4669-ab17-1eefaaf90d0c 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:05 localhost nova_compute[281045]: 2025-12-02 10:10:05.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:05 localhost nova_compute[281045]: 2025-12-02 10:10:05.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task 
ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:05 localhost nova_compute[281045]: 2025-12-02 10:10:05.545 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:05 localhost nova_compute[281045]: 2025-12-02 10:10:05.564 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:10:05 localhost nova_compute[281045]: 2025-12-02 10:10:05.564 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:10:05 localhost nova_compute[281045]: 2025-12-02 10:10:05.565 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:10:05 localhost nova_compute[281045]: 2025-12-02 10:10:05.565 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:10:05 localhost 
nova_compute[281045]: 2025-12-02 10:10:05.566 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:10:05 localhost nova_compute[281045]: 2025-12-02 10:10:05.816 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:10:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3363842838' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.081 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:10:06 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:06.091 2 INFO neutron.agent.securitygroups_rpc [None req-9685cf0d-187e-491c-a3b3-f6b6113116e3 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.257 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.258 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11513MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.259 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.259 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.320 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.321 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.342 281049 DEBUG 
oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:10:06 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:06.721 2 INFO neutron.agent.securitygroups_rpc [None req-3503b005-9d9f-48d0-a8b4-a51ac4fa455f 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:10:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4072896173' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.856 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:10:06 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:06.864 262347 INFO neutron.agent.linux.ip_lib [None req-db5016fe-fe30-43be-9878-19fb858defeb - - - - - -] Device tapdcdae0ab-51 cannot be used as it has no MAC address#033[00m Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.865 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.884 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:06 localhost kernel: device tapdcdae0ab-51 entered promiscuous mode Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.891 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:06 localhost NetworkManager[5967]: [1764670206.8917] manager: (tapdcdae0ab-51): new Generic device (/org/freedesktop/NetworkManager/Devices/34) Dec 2 05:10:06 localhost systemd-udevd[318455]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:10:06 localhost ovn_controller[153778]: 2025-12-02T10:10:06Z|00160|binding|INFO|Claiming lport dcdae0ab-51b5-4def-b7cc-5be762ad32e1 for this chassis. Dec 2 05:10:06 localhost ovn_controller[153778]: 2025-12-02T10:10:06Z|00161|binding|INFO|dcdae0ab-51b5-4def-b7cc-5be762ad32e1: Claiming unknown Dec 2 05:10:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:10:06 Dec 2 05:10:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 05:10:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap Dec 2 05:10:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['manila_metadata', 'backups', 'volumes', 'manila_data', '.mgr', 'images', 'vms'] Dec 2 05:10:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes Dec 2 05:10:06 localhost journal[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.917 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:06 localhost journal[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device Dec 2 05:10:06 localhost ovn_controller[153778]: 2025-12-02T10:10:06Z|00162|binding|INFO|Setting lport 
dcdae0ab-51b5-4def-b7cc-5be762ad32e1 ovn-installed in OVS Dec 2 05:10:06 localhost journal[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.921 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.923 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:06 localhost journal[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device Dec 2 05:10:06 localhost journal[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device Dec 2 05:10:06 localhost journal[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device Dec 2 05:10:06 localhost journal[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device Dec 2 05:10:06 localhost journal[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.946 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:10:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:10:06 localhost nova_compute[281045]: 2025-12-02 10:10:06.970 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:10:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:10:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:10:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Dec 2 05:10:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 2 05:10:07 localhost ovn_controller[153778]: 2025-12-02T10:10:07Z|00163|binding|INFO|Setting lport dcdae0ab-51b5-4def-b7cc-5be762ad32e1 up in Southbound Dec 2 05:10:07 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:07.100 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-166c1533-b4e1-407e-b8de-28630b01d9d5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-166c1533-b4e1-407e-b8de-28630b01d9d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d11f96a2f644a22a82a6af9a2a1e5d2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c55ed48-bff3-4947-9ef2-3c1cfe7583ea, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=dcdae0ab-51b5-4def-b7cc-5be762ad32e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:07 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:07.102 159483 INFO neutron.agent.ovn.metadata.agent [-] Port dcdae0ab-51b5-4def-b7cc-5be762ad32e1 in datapath 
166c1533-b4e1-407e-b8de-28630b01d9d5 bound to our chassis#033[00m Dec 2 05:10:07 localhost nova_compute[281045]: 2025-12-02 10:10:07.103 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:10:07 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:07.104 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port 659d58ac-5e35-45bd-9fb2-a533ad9914c5 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:10:07 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:07.105 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 166c1533-b4e1-407e-b8de-28630b01d9d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:10:07 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:07.105 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[5d22b721-f92d-4fde-a40e-68787bd53c6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v367: 177 pgs: 177 active+clean; 147 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 24 KiB/s wr, 7 op/s Dec 2 05:10:07 localhost nova_compute[281045]: 2025-12-02 
10:10:07.142 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:10:07 localhost nova_compute[281045]: 2025-12-02 10:10:07.143 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:10:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:10:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:10:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 2 05:10:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:10:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Dec 2 05:10:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:10:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.443522589800856e-05 quantized to 32 (current 32) Dec 2 05:10:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:10:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 
quantized to 32 (current 32) Dec 2 05:10:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:10:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:10:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:10:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32) Dec 2 05:10:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:10:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 9.187648310999442e-05 of space, bias 4.0, pg target 0.07313368055555555 quantized to 16 (current 16) Dec 2 05:10:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:10:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:10:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:10:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:10:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:10:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:10:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:10:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:10:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:10:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, 
start_after= Dec 2 05:10:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:07 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 2 05:10:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 2 05:10:07 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Dec 2 05:10:07 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:08 localhost podman[318526]: Dec 2 05:10:08 localhost podman[318526]: 2025-12-02 10:10:08.026378304 +0000 UTC m=+0.064946368 container create fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-166c1533-b4e1-407e-b8de-28630b01d9d5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, 
tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:10:08 localhost podman[318526]: 2025-12-02 10:10:07.987660134 +0000 UTC m=+0.026228198 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:10:08 localhost nova_compute[281045]: 2025-12-02 10:10:08.114 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:08 localhost systemd[1]: Started libpod-conmon-fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877.scope. Dec 2 05:10:08 localhost nova_compute[281045]: 2025-12-02 10:10:08.125 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:08 localhost nova_compute[281045]: 2025-12-02 10:10:08.126 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:10:08 localhost nova_compute[281045]: 2025-12-02 10:10:08.126 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:10:08 localhost systemd[1]: tmp-crun.ZnNKlc.mount: Deactivated successfully. Dec 2 05:10:08 localhost systemd[1]: Started libcrun container. 
Dec 2 05:10:08 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:08 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 2 05:10:08 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b369f80c954f4690246e93f2906d95212a5da2cdad75d836922a3269ac60754b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:10:08 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:10:08 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:10:08 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:08 localhost podman[318526]: 2025-12-02 10:10:08.152371755 +0000 UTC m=+0.190939819 container init fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-166c1533-b4e1-407e-b8de-28630b01d9d5, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:10:08 localhost nova_compute[281045]: 2025-12-02 10:10:08.159 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 05:10:08 localhost nova_compute[281045]: 2025-12-02 10:10:08.160 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:08 localhost nova_compute[281045]: 2025-12-02 10:10:08.160 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:08 localhost podman[318526]: 2025-12-02 10:10:08.161639179 +0000 UTC m=+0.200207253 container start fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-166c1533-b4e1-407e-b8de-28630b01d9d5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 05:10:08 localhost dnsmasq[318545]: started, version 2.85 cachesize 150 Dec 
2 05:10:08 localhost dnsmasq[318545]: DNS service limited to local subnets Dec 2 05:10:08 localhost dnsmasq[318545]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:10:08 localhost dnsmasq[318545]: warning: no upstream servers configured Dec 2 05:10:08 localhost dnsmasq-dhcp[318545]: DHCP, static leases only on 10.100.0.16, lease time 1d Dec 2 05:10:08 localhost dnsmasq[318545]: read /var/lib/neutron/dhcp/166c1533-b4e1-407e-b8de-28630b01d9d5/addn_hosts - 0 addresses Dec 2 05:10:08 localhost dnsmasq-dhcp[318545]: read /var/lib/neutron/dhcp/166c1533-b4e1-407e-b8de-28630b01d9d5/host Dec 2 05:10:08 localhost dnsmasq-dhcp[318545]: read /var/lib/neutron/dhcp/166c1533-b4e1-407e-b8de-28630b01d9d5/opts Dec 2 05:10:08 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:08.395 262347 INFO neutron.agent.dhcp.agent [None req-74f0b61f-c27c-4c47-80d9-385cbdcf0d09 - - - - - -] DHCP configuration for ports {'08c7a173-e070-49da-a327-804bad2b36c4'} is completed#033[00m Dec 2 05:10:08 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:08 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:08 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:08 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 2 05:10:08 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:08.495 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 
659d58ac-5e35-45bd-9fb2-a533ad9914c5 with type ""#033[00m Dec 2 05:10:08 localhost ovn_controller[153778]: 2025-12-02T10:10:08Z|00164|binding|INFO|Removing iface tapdcdae0ab-51 ovn-installed in OVS Dec 2 05:10:08 localhost ovn_controller[153778]: 2025-12-02T10:10:08Z|00165|binding|INFO|Removing lport dcdae0ab-51b5-4def-b7cc-5be762ad32e1 ovn-installed in OVS Dec 2 05:10:08 localhost nova_compute[281045]: 2025-12-02 10:10:08.497 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:08 localhost nova_compute[281045]: 2025-12-02 10:10:08.501 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:08 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:08.503 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-166c1533-b4e1-407e-b8de-28630b01d9d5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-166c1533-b4e1-407e-b8de-28630b01d9d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d11f96a2f644a22a82a6af9a2a1e5d2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c55ed48-bff3-4947-9ef2-3c1cfe7583ea, chassis=[], 
tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=dcdae0ab-51b5-4def-b7cc-5be762ad32e1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:08 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:08.505 159483 INFO neutron.agent.ovn.metadata.agent [-] Port dcdae0ab-51b5-4def-b7cc-5be762ad32e1 in datapath 166c1533-b4e1-407e-b8de-28630b01d9d5 unbound from our chassis#033[00m Dec 2 05:10:08 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:08.507 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 166c1533-b4e1-407e-b8de-28630b01d9d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:10:08 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:08.507 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e68ea0e5-7ad0-408b-a240-22c033063f68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:08 localhost dnsmasq[318545]: exiting on receipt of SIGTERM Dec 2 05:10:08 localhost podman[318563]: 2025-12-02 10:10:08.567510181 +0000 UTC m=+0.049074240 container kill fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-166c1533-b4e1-407e-b8de-28630b01d9d5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:10:08 localhost systemd[1]: libpod-fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877.scope: Deactivated successfully. 
Dec 2 05:10:08 localhost podman[318577]: 2025-12-02 10:10:08.615440143 +0000 UTC m=+0.036736579 container died fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-166c1533-b4e1-407e-b8de-28630b01d9d5, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:10:08 localhost podman[318577]: 2025-12-02 10:10:08.639365518 +0000 UTC m=+0.060661904 container cleanup fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-166c1533-b4e1-407e-b8de-28630b01d9d5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:10:08 localhost systemd[1]: libpod-conmon-fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877.scope: Deactivated successfully. 
Dec 2 05:10:08 localhost podman[318578]: 2025-12-02 10:10:08.697815464 +0000 UTC m=+0.112485417 container remove fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-166c1533-b4e1-407e-b8de-28630b01d9d5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:10:08 localhost nova_compute[281045]: 2025-12-02 10:10:08.711 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:08 localhost kernel: device tapdcdae0ab-51 left promiscuous mode Dec 2 05:10:08 localhost nova_compute[281045]: 2025-12-02 10:10:08.727 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:08 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:08.933 262347 INFO neutron.agent.dhcp.agent [None req-ee747dfa-2918-431f-b274-20caf043f838 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:09 localhost systemd[1]: var-lib-containers-storage-overlay-b369f80c954f4690246e93f2906d95212a5da2cdad75d836922a3269ac60754b-merged.mount: Deactivated successfully. Dec 2 05:10:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877-userdata-shm.mount: Deactivated successfully. Dec 2 05:10:09 localhost systemd[1]: run-netns-qdhcp\x2d166c1533\x2db4e1\x2d407e\x2db8de\x2d28630b01d9d5.mount: Deactivated successfully. 
Dec 2 05:10:09 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v368: 177 pgs: 177 active+clean; 163 MiB data, 837 MiB used, 41 GiB / 42 GiB avail; 9.5 KiB/s rd, 1.4 MiB/s wr, 24 op/s Dec 2 05:10:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:09.148 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:09 localhost ceph-mgr[287188]: [devicehealth INFO root] Check health Dec 2 05:10:09 localhost nova_compute[281045]: 2025-12-02 10:10:09.576 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:10 localhost nova_compute[281045]: 2025-12-02 10:10:10.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:10 localhost nova_compute[281045]: 2025-12-02 10:10:10.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:10:10 localhost nova_compute[281045]: 2025-12-02 10:10:10.819 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v369: 177 pgs: 177 active+clean; 163 MiB data, 837 MiB used, 41 GiB / 42 GiB avail; 9.4 KiB/s rd, 1.4 MiB/s wr, 20 op/s Dec 2 05:10:11 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:10:11 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:10:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 2 05:10:11 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:10:11 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:10:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:10:11 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:11 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:10:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:11 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:10:11 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:10:11 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:11.926 262347 INFO neutron.agent.linux.ip_lib [None req-f7e42905-5b42-458e-a385-8ecab4999b16 - - - - - -] Device tapdad34ce7-27 cannot be used as it has no MAC address#033[00m Dec 2 05:10:11 localhost nova_compute[281045]: 2025-12-02 10:10:11.997 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:12 localhost kernel: device tapdad34ce7-27 entered promiscuous mode Dec 2 05:10:12 localhost NetworkManager[5967]: [1764670212.0035] manager: (tapdad34ce7-27): new Generic device (/org/freedesktop/NetworkManager/Devices/35) Dec 2 05:10:12 localhost nova_compute[281045]: 2025-12-02 10:10:12.003 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:12 localhost ovn_controller[153778]: 2025-12-02T10:10:12Z|00166|binding|INFO|Claiming lport dad34ce7-27fc-44a3-9b51-6f8e06622f64 for this chassis. Dec 2 05:10:12 localhost ovn_controller[153778]: 2025-12-02T10:10:12Z|00167|binding|INFO|dad34ce7-27fc-44a3-9b51-6f8e06622f64: Claiming unknown Dec 2 05:10:12 localhost systemd-udevd[318616]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:10:12 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:12.014 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=421d26f0-533b-46ec-b4bf-90e9b385208d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=dad34ce7-27fc-44a3-9b51-6f8e06622f64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 2 05:10:12 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:12.016 159483 INFO neutron.agent.ovn.metadata.agent [-] Port dad34ce7-27fc-44a3-9b51-6f8e06622f64 in datapath 9a4bcf0d-86b2-4c41-a908-20a95f0c63b6 bound to our chassis
Dec 2 05:10:12 localhost ovn_controller[153778]: 2025-12-02T10:10:12Z|00168|binding|INFO|Setting lport dad34ce7-27fc-44a3-9b51-6f8e06622f64 ovn-installed in OVS
Dec 2 05:10:12 localhost ovn_controller[153778]: 2025-12-02T10:10:12Z|00169|binding|INFO|Setting lport dad34ce7-27fc-44a3-9b51-6f8e06622f64 up in Southbound
Dec 2 05:10:12 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:12.017 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port e5695a05-13ac-4749-a287-15f3726545f7 with type ""
Dec 2 05:10:12 localhost ovn_controller[153778]: 2025-12-02T10:10:12Z|00170|binding|INFO|Removing iface tapdad34ce7-27 ovn-installed in OVS
Dec 2 05:10:12 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:12.018 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=421d26f0-533b-46ec-b4bf-90e9b385208d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=dad34ce7-27fc-44a3-9b51-6f8e06622f64) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 2 05:10:12 localhost nova_compute[281045]: 2025-12-02 10:10:12.018 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:10:12 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:12.024 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9a4bcf0d-86b2-4c41-a908-20a95f0c63b6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 2 05:10:12 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:12.025 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[c0777391-65a8-4e93-8fa3-17cc89c53ec6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 2 05:10:12 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:12.026 159483 INFO neutron.agent.ovn.metadata.agent [-] Port dad34ce7-27fc-44a3-9b51-6f8e06622f64 in datapath 9a4bcf0d-86b2-4c41-a908-20a95f0c63b6 unbound from our chassis
Dec 2 05:10:12 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:12.026 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9a4bcf0d-86b2-4c41-a908-20a95f0c63b6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 2 05:10:12 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:12.027 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[2973217b-c11b-4f21-b2fe-8b456b52d504]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 2 05:10:12 localhost nova_compute[281045]: 2025-12-02 10:10:12.027 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:10:12 localhost journal[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 2 05:10:12 localhost journal[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 2 05:10:12 localhost nova_compute[281045]: 2025-12-02 10:10:12.036 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:10:12 localhost journal[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 2 05:10:12 localhost nova_compute[281045]: 2025-12-02 10:10:12.039 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:10:12 localhost journal[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 2 05:10:12 localhost journal[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 2 05:10:12 localhost journal[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 2 05:10:12 localhost journal[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 2 05:10:12 localhost nova_compute[281045]: 2025-12-02 10:10:12.066 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:10:12 localhost journal[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 2 05:10:12 localhost nova_compute[281045]: 2025-12-02 10:10:12.094 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:10:12 localhost openstack_network_exporter[241816]: ERROR 10:10:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:10:12 localhost openstack_network_exporter[241816]: ERROR 10:10:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:10:12 localhost openstack_network_exporter[241816]: ERROR 10:10:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 05:10:12 localhost openstack_network_exporter[241816]: ERROR 10:10:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 05:10:12 localhost openstack_network_exporter[241816]:
Dec 2 05:10:12 localhost openstack_network_exporter[241816]: ERROR 10:10:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 05:10:12 localhost openstack_network_exporter[241816]:
Dec 2 05:10:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:10:12 localhost podman[318688]: 2025-12-02 10:10:12.827719152 +0000 UTC m=+0.036230914 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:10:12 localhost podman[318688]:
Dec 2 05:10:12 localhost podman[318688]: 2025-12-02 10:10:12.968402575 +0000 UTC m=+0.176914347 container create 23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 2 05:10:13 localhost systemd[1]: Started libpod-conmon-23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5.scope.
Dec 2 05:10:13 localhost systemd[1]: Started libcrun container.
Dec 2 05:10:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c974d07664f24ea733e7f13eaeb626065a77246b8e684500f0109ffb11044aca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:10:13 localhost podman[318688]: 2025-12-02 10:10:13.060224966 +0000 UTC m=+0.268736729 container init 23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 2 05:10:13 localhost dnsmasq[318706]: started, version 2.85 cachesize 150
Dec 2 05:10:13 localhost dnsmasq[318706]: DNS service limited to local subnets
Dec 2 05:10:13 localhost dnsmasq[318706]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:10:13 localhost dnsmasq[318706]: warning: no upstream servers configured
Dec 2 05:10:13 localhost dnsmasq-dhcp[318706]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 2 05:10:13 localhost podman[318688]: 2025-12-02 10:10:13.097986896 +0000 UTC m=+0.306498698 container start 23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 2 05:10:13 localhost dnsmasq[318706]: read /var/lib/neutron/dhcp/9a4bcf0d-86b2-4c41-a908-20a95f0c63b6/addn_hosts - 0 addresses
Dec 2 05:10:13 localhost dnsmasq-dhcp[318706]: read /var/lib/neutron/dhcp/9a4bcf0d-86b2-4c41-a908-20a95f0c63b6/host
Dec 2 05:10:13 localhost dnsmasq-dhcp[318706]: read /var/lib/neutron/dhcp/9a4bcf0d-86b2-4c41-a908-20a95f0c63b6/opts
Dec 2 05:10:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v370: 177 pgs: 177 active+clean; 291 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 12 MiB/s wr, 40 op/s
Dec 2 05:10:13 localhost nova_compute[281045]: 2025-12-02 10:10:13.193 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:10:13 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:13.250 262347 INFO neutron.agent.dhcp.agent [None req-b8e8c85c-cd4f-4434-b1ad-3d359266222c - - - - - -] DHCP configuration for ports {'c8067e90-d871-4aed-96dc-03c10e6871ff'} is completed
Dec 2 05:10:13 localhost dnsmasq[318706]: exiting on receipt of SIGTERM
Dec 2 05:10:13 localhost podman[318722]: 2025-12-02 10:10:13.396151987 +0000 UTC m=+0.054899907 container kill 23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 2 05:10:13 localhost systemd[1]: libpod-23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5.scope: Deactivated successfully.
Dec 2 05:10:13 localhost podman[318736]: 2025-12-02 10:10:13.448705963 +0000 UTC m=+0.043767976 container died 23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 2 05:10:13 localhost podman[318736]: 2025-12-02 10:10:13.547710935 +0000 UTC m=+0.142772938 container cleanup 23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 2 05:10:13 localhost systemd[1]: libpod-conmon-23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5.scope: Deactivated successfully.
Dec 2 05:10:13 localhost podman[318743]: 2025-12-02 10:10:13.571091233 +0000 UTC m=+0.152227198 container remove 23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:10:13 localhost nova_compute[281045]: 2025-12-02 10:10:13.583 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:10:13 localhost kernel: device tapdad34ce7-27 left promiscuous mode
Dec 2 05:10:13 localhost nova_compute[281045]: 2025-12-02 10:10:13.603 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:10:13 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:13.626 262347 INFO neutron.agent.dhcp.agent [None req-bd26d58b-e335-43f0-aa05-c75946207c8d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 2 05:10:13 localhost nova_compute[281045]: 2025-12-02 10:10:13.626 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:10:13 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:13.627 262347 INFO neutron.agent.dhcp.agent [None req-bd26d58b-e335-43f0-aa05-c75946207c8d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 2 05:10:13 localhost systemd[1]: tmp-crun.2iLXZE.mount: Deactivated successfully.
Dec 2 05:10:13 localhost systemd[1]: var-lib-containers-storage-overlay-c974d07664f24ea733e7f13eaeb626065a77246b8e684500f0109ffb11044aca-merged.mount: Deactivated successfully.
Dec 2 05:10:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5-userdata-shm.mount: Deactivated successfully.
Dec 2 05:10:13 localhost systemd[1]: run-netns-qdhcp\x2d9a4bcf0d\x2d86b2\x2d4c41\x2da908\x2d20a95f0c63b6.mount: Deactivated successfully.
Dec 2 05:10:14 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:14.384 2 INFO neutron.agent.securitygroups_rpc [None req-aa29d14d-f8b7-4441-acc0-85287ab48c6d 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 2 05:10:14 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 2 05:10:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:10:14 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 2 05:10:14 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 2 05:10:14 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 2 05:10:14 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 2 05:10:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:10:15 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 2 05:10:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:10:15 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 2 05:10:15 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 2 05:10:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:10:15 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:15.078 2 INFO neutron.agent.securitygroups_rpc [None req-976c47c9-ae22-4daa-9b62-4c3bf838f3e2 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 2 05:10:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v371: 177 pgs: 177 active+clean; 291 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 12 MiB/s wr, 37 op/s
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:15 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:15.590 2 INFO neutron.agent.securitygroups_rpc [None req-90e76c3f-20a5-47a4-903d-78b983867e31 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 2 05:10:15 localhost nova_compute[281045]: 2025-12-02 10:10:15.820 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:10:15 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 2 05:10:15 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 2 05:10:15 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 2 05:10:15 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 2 05:10:15 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:15.916 2 INFO neutron.agent.securitygroups_rpc [None req-5542fe4c-166b-4ffa-972c-029f68af7f10 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 2 05:10:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 05:10:16 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:16.090 262347 INFO neutron.agent.dhcp.agent [None req-285c8656-c63d-4a62-a012-01c1dac192db - - - - - -] Synchronizing state
Dec 2 05:10:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 05:10:16 localhost podman[318765]: 2025-12-02 10:10:16.097870423 +0000 UTC m=+0.089499051 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 2 05:10:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 05:10:16 localhost podman[318765]: 2025-12-02 10:10:16.132983251 +0000 UTC m=+0.124611889 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Dec 2 05:10:16 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 05:10:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 05:10:16 localhost podman[318783]: 2025-12-02 10:10:16.244445796 +0000 UTC m=+0.139555439 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 2 05:10:16 localhost podman[318783]: 2025-12-02 10:10:16.256059453 +0000 UTC m=+0.151169076 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 2 05:10:16 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:16.268 262347 INFO neutron.agent.dhcp.agent [None req-50790067-e1ed-419b-a4c8-808a5373a8ca - - - - - -] All active networks have been fetched through RPC.
Dec 2 05:10:16 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:16.268 262347 INFO neutron.agent.dhcp.agent [-] Starting network d9f6bbb9-ad7f-4259-9522-4dab6766c81c dhcp configuration
Dec 2 05:10:16 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:16.269 262347 INFO neutron.agent.dhcp.agent [-] Finished network d9f6bbb9-ad7f-4259-9522-4dab6766c81c dhcp configuration
Dec 2 05:10:16 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:16.269 262347 INFO neutron.agent.dhcp.agent [None req-50790067-e1ed-419b-a4c8-808a5373a8ca - - - - - -] Synchronizing state complete
Dec 2 05:10:16 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:16.270 262347 INFO neutron.agent.dhcp.agent [None req-2567fcc3-49f9-4f19-b70e-cd0be5e905e9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 2 05:10:16 localhost podman[318805]: 2025-12-02 10:10:16.291270555 +0000 UTC m=+0.080722461 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 2 05:10:16 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 2 05:10:16 localhost podman[318805]: 2025-12-02 10:10:16.356316414 +0000 UTC m=+0.145768290 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 05:10:16 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:10:16 localhost podman[318784]: 2025-12-02 10:10:16.358575303 +0000 UTC m=+0.244926787 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:10:16 localhost podman[318784]: 2025-12-02 10:10:16.438393955 +0000 UTC m=+0.324745479 container exec_died 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 05:10:16 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:10:16 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:16.908 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v372: 177 pgs: 177 active+clean; 291 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 12 MiB/s wr, 37 op/s Dec 2 05:10:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:17 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch Dec 2 05:10:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:10:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 2 05:10:17 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:10:17 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:10:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r 
path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:10:17 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:10:18 localhost nova_compute[281045]: 2025-12-02 10:10:18.228 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:18 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:18.245 2 INFO neutron.agent.securitygroups_rpc [None req-46c6c36b-2a47-445c-9836-d1e79e5b14a9 8d2b383649fa45f2821f6e290127374a 84fd536b8b4d489f944ed3e4bbfaeb5b - - default default] Security group rule updated ['d6dcbb7b-b610-4062-87d4-37eec03c1ecf']#033[00m Dec 2 05:10:18 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:10:18 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r 
path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:18 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:18 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:10:18 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:10:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:19 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' Dec 2 05:10:19 localhost ceph-mgr[287188]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta' Dec 2 05:10:19 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:19 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "format": "json"}]: dispatch Dec 2 05:10:19 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:19 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v373: 177 pgs: 177 active+clean; 435 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 29 KiB/s rd, 24 MiB/s wr, 54 op/s Dec 2 05:10:20 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:20.292 2 INFO neutron.agent.securitygroups_rpc [None req-0a3dbc5e-28c6-4790-aae5-682551e66674 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:20 localhost nova_compute[281045]: 2025-12-02 10:10:20.823 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap 
v374: 177 pgs: 177 active+clean; 435 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 20 KiB/s rd, 23 MiB/s wr, 37 op/s Dec 2 05:10:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch Dec 2 05:10:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 2 05:10:21 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:10:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Dec 2 05:10:21 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:10:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch Dec 2 05:10:21 localhost ceph-mgr[287188]: [volumes INFO 
volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:10:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:10:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:22 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:10:22 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:10:22 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:10:22 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 2 05:10:22 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "aacf1e5d-1b53-42f1-b3a7-45f0acb43c13", "format": "json"}]: dispatch Dec 2 05:10:22 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, 
snap_name:aacf1e5d-1b53-42f1-b3a7-45f0acb43c13, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:22 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aacf1e5d-1b53-42f1-b3a7-45f0acb43c13, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:23 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:23.032 262347 INFO neutron.agent.linux.ip_lib [None req-56d2e12e-fb73-49f7-ad4d-e7c6f8822ab2 - - - - - -] Device tap8e7a6388-06 cannot be used as it has no MAC address#033[00m Dec 2 05:10:23 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:23.083 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:10:22Z, description=, device_id=f7072194-e728-41c3-b399-b4e1ad65ca17, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d41f53db-06e6-4a78-b76c-e87709f28b83, ip_allocation=immediate, mac_address=fa:16:3e:d8:74:e9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, 
standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2495, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:10:22Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:10:23 localhost nova_compute[281045]: 2025-12-02 10:10:23.133 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:23 localhost kernel: device tap8e7a6388-06 entered promiscuous mode Dec 2 05:10:23 localhost NetworkManager[5967]: [1764670223.1398] manager: (tap8e7a6388-06): new Generic device (/org/freedesktop/NetworkManager/Devices/36) Dec 2 05:10:23 localhost nova_compute[281045]: 2025-12-02 10:10:23.140 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:23 localhost ovn_controller[153778]: 2025-12-02T10:10:23Z|00171|binding|INFO|Claiming lport 8e7a6388-0616-4036-bc8b-c45817966af9 for this chassis. Dec 2 05:10:23 localhost ovn_controller[153778]: 2025-12-02T10:10:23Z|00172|binding|INFO|8e7a6388-0616-4036-bc8b-c45817966af9: Claiming unknown Dec 2 05:10:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v375: 177 pgs: 177 active+clean; 563 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 27 KiB/s rd, 33 MiB/s wr, 54 op/s Dec 2 05:10:23 localhost systemd-udevd[318865]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:10:23 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:23.154 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-fc2e8456-8064-45d4-b986-3bd5157209ba', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2e8456-8064-45d4-b986-3bd5157209ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f7326c3837b4427191aafcff504110ac', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb1a8e80-528c-4bda-8d5b-06a577344504, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8e7a6388-0616-4036-bc8b-c45817966af9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:23 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:23.156 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 8e7a6388-0616-4036-bc8b-c45817966af9 in datapath fc2e8456-8064-45d4-b986-3bd5157209ba bound to our chassis#033[00m Dec 2 05:10:23 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:23.158 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port 9fe7ca25-963a-4e60-ab5d-063d04cd1fbe IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:10:23 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:23.158 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc2e8456-8064-45d4-b986-3bd5157209ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:10:23 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:23.159 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[50b6e7b6-c835-47a9-bf43-a0e722f49097]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:23 localhost ovn_controller[153778]: 2025-12-02T10:10:23Z|00173|binding|INFO|Setting lport 8e7a6388-0616-4036-bc8b-c45817966af9 ovn-installed in OVS Dec 2 05:10:23 localhost ovn_controller[153778]: 2025-12-02T10:10:23Z|00174|binding|INFO|Setting lport 8e7a6388-0616-4036-bc8b-c45817966af9 up in Southbound Dec 2 05:10:23 localhost nova_compute[281045]: 2025-12-02 10:10:23.160 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:23 localhost nova_compute[281045]: 2025-12-02 10:10:23.161 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. 
Dec 2 05:10:23 localhost nova_compute[281045]: 2025-12-02 10:10:23.179 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:23 localhost nova_compute[281045]: 2025-12-02 10:10:23.209 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:23 localhost nova_compute[281045]: 2025-12-02 10:10:23.230 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:10:23 localhost podman[318873]: 2025-12-02 10:10:23.303825628 +0000 UTC m=+0.133193193 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:10:23 localhost systemd[1]: tmp-crun.W7Dkry.mount: Deactivated successfully. Dec 2 05:10:23 localhost podman[318873]: 2025-12-02 10:10:23.315689372 +0000 UTC m=+0.145056937 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:10:23 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: 
Deactivated successfully. Dec 2 05:10:23 localhost podman[318895]: 2025-12-02 10:10:23.379795792 +0000 UTC m=+0.146409869 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:10:23 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:10:23 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:10:23 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:10:23 localhost podman[318918]: 2025-12-02 10:10:23.369498977 +0000 UTC m=+0.059638985 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 2 05:10:23 localhost podman[318918]: 2025-12-02 10:10:23.47702595 +0000 UTC m=+0.167165958 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-type=git, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.tags=minimal rhel9) Dec 2 05:10:23 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:10:23 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:23.649 262347 INFO neutron.agent.dhcp.agent [None req-d5c42fa7-b5be-4a22-8211-cbb0209eee28 - - - - - -] DHCP configuration for ports {'d41f53db-06e6-4a78-b76c-e87709f28b83'} is completed#033[00m Dec 2 05:10:24 localhost nova_compute[281045]: 2025-12-02 10:10:24.038 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:24 localhost podman[318991]: Dec 2 05:10:24 localhost podman[318991]: 2025-12-02 10:10:24.062634793 +0000 UTC m=+0.095694141 container create bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:10:24 localhost systemd[1]: Started libpod-conmon-bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348.scope. Dec 2 05:10:24 localhost podman[318991]: 2025-12-02 10:10:24.014230166 +0000 UTC m=+0.047289594 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:10:24 localhost systemd[1]: Started libcrun container. 
Dec 2 05:10:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2158c3d2b4f5070bc0f9feccb69eab6266c44d72576ad850b561465969ca2e04/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:10:24 localhost podman[318991]: 2025-12-02 10:10:24.138720271 +0000 UTC m=+0.171779659 container init bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:10:24 localhost podman[318991]: 2025-12-02 10:10:24.150028548 +0000 UTC m=+0.183087906 container start bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:10:24 localhost dnsmasq[319009]: started, version 2.85 cachesize 150 Dec 2 05:10:24 localhost dnsmasq[319009]: DNS service limited to local subnets Dec 2 05:10:24 localhost dnsmasq[319009]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:10:24 localhost dnsmasq[319009]: warning: no upstream servers configured Dec 
2 05:10:24 localhost dnsmasq-dhcp[319009]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:10:24 localhost dnsmasq[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/addn_hosts - 0 addresses Dec 2 05:10:24 localhost dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/host Dec 2 05:10:24 localhost dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/opts Dec 2 05:10:24 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:24.325 262347 INFO neutron.agent.dhcp.agent [None req-093141b0-8f58-450d-bb4d-3a3097e1d39a - - - - - -] DHCP configuration for ports {'15bd109d-23f2-4220-b6ed-d39b4a974041'} is completed#033[00m Dec 2 05:10:24 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:10:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:10:24 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 2 05:10:24 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:10:24 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:10:24 
localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:10:24 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:10:24 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:24.932 2 INFO neutron.agent.securitygroups_rpc [None req-3ae6da31-b902-4566-8baa-11e094d2ee12 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v376: 177 pgs: 177 active+clean; 563 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 16 KiB/s rd, 23 MiB/s wr, 33 op/s Dec 2 05:10:25 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:10:25 localhost ceph-mon[301710]: from='mgr.34354 ' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:25 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:25 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:10:25 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:25.787 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:10:24Z, description=, device_id=f7072194-e728-41c3-b399-b4e1ad65ca17, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=6f85e6fe-549a-4116-a1ab-84bd28711afb, ip_allocation=immediate, mac_address=fa:16:3e:b7:59:68, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:10:18Z, 
description=, dns_domain=, id=fc2e8456-8064-45d4-b986-3bd5157209ba, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-2082968740-network, port_security_enabled=True, project_id=f7326c3837b4427191aafcff504110ac, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17638, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2467, status=ACTIVE, subnets=['2cf7d7fa-d01f-4f91-a974-bccbb7a5e8f6'], tags=[], tenant_id=f7326c3837b4427191aafcff504110ac, updated_at=2025-12-02T10:10:20Z, vlan_transparent=None, network_id=fc2e8456-8064-45d4-b986-3bd5157209ba, port_security_enabled=False, project_id=f7326c3837b4427191aafcff504110ac, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2506, status=DOWN, tags=[], tenant_id=f7326c3837b4427191aafcff504110ac, updated_at=2025-12-02T10:10:24Z on network fc2e8456-8064-45d4-b986-3bd5157209ba#033[00m Dec 2 05:10:25 localhost nova_compute[281045]: 2025-12-02 10:10:25.825 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:25 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "99d59f6b-cf2c-47ae-b465-7c2965afd103", "format": "json"}]: dispatch Dec 2 05:10:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:99d59f6b-cf2c-47ae-b465-7c2965afd103, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:26 localhost dnsmasq[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/addn_hosts - 1 addresses 
Dec 2 05:10:26 localhost dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/host Dec 2 05:10:26 localhost dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/opts Dec 2 05:10:26 localhost podman[319063]: 2025-12-02 10:10:26.018182099 +0000 UTC m=+0.042340452 container kill bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 2 05:10:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:99d59f6b-cf2c-47ae-b465-7c2965afd103, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:26 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:26.316 262347 INFO neutron.agent.dhcp.agent [None req-2cdfcba8-4d8c-4b77-af18-f2c12137f31c - - - - - -] DHCP configuration for ports {'6f85e6fe-549a-4116-a1ab-84bd28711afb'} is completed#033[00m Dec 2 05:10:26 localhost podman[319155]: 2025-12-02 10:10:26.591129944 +0000 UTC m=+0.102343336 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, 
com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, version=7, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 05:10:26 localhost podman[319155]: 2025-12-02 10:10:26.71985949 +0000 UTC m=+0.231072872 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.buildah.version=1.41.4, distribution-scope=public, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, name=rhceph, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-type=git, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 05:10:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v377: 177 pgs: 177 active+clean; 563 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 16 KiB/s rd, 23 MiB/s wr, 33 op/s Dec 2 05:10:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 05:10:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 05:10:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 05:10:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 05:10:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 05:10:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 05:10:27 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:10:27 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:10:27 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:10:27 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:10:27 localhost ceph-mon[301710]: from='mgr.34354 ' 
entity='mgr.np0005541914.lljzmk' Dec 2 05:10:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:28 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 2 05:10:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 2 05:10:28 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Dec 2 05:10:28 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:10:28 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:28.127 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:10:24Z, description=, device_id=f7072194-e728-41c3-b399-b4e1ad65ca17, device_owner=network:router_interface, 
dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=6f85e6fe-549a-4116-a1ab-84bd28711afb, ip_allocation=immediate, mac_address=fa:16:3e:b7:59:68, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:10:18Z, description=, dns_domain=, id=fc2e8456-8064-45d4-b986-3bd5157209ba, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-2082968740-network, port_security_enabled=True, project_id=f7326c3837b4427191aafcff504110ac, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17638, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2467, status=ACTIVE, subnets=['2cf7d7fa-d01f-4f91-a974-bccbb7a5e8f6'], tags=[], tenant_id=f7326c3837b4427191aafcff504110ac, updated_at=2025-12-02T10:10:20Z, vlan_transparent=None, network_id=fc2e8456-8064-45d4-b986-3bd5157209ba, port_security_enabled=False, project_id=f7326c3837b4427191aafcff504110ac, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2506, status=DOWN, tags=[], tenant_id=f7326c3837b4427191aafcff504110ac, updated_at=2025-12-02T10:10:24Z on network fc2e8456-8064-45d4-b986-3bd5157209ba#033[00m Dec 2 05:10:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:28 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 2 05:10:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:28 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:10:28 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:10:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:28 localhost nova_compute[281045]: 2025-12-02 10:10:28.234 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:28 localhost systemd[1]: tmp-crun.mBz26C.mount: Deactivated successfully. 
Dec 2 05:10:28 localhost dnsmasq[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/addn_hosts - 1 addresses Dec 2 05:10:28 localhost dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/host Dec 2 05:10:28 localhost dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/opts Dec 2 05:10:28 localhost podman[319360]: 2025-12-02 10:10:28.368392833 +0000 UTC m=+0.066091482 container kill bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:10:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Dec 2 05:10:28 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Dec 2 05:10:28 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": 
"osd_memory_target"} v 0) Dec 2 05:10:28 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Dec 2 05:10:28 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Dec 2 05:10:28 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Dec 2 05:10:28 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541914.localdomain to 836.6M Dec 2 05:10:28 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541914.localdomain to 836.6M Dec 2 05:10:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 2 05:10:28 localhost ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on 
np0005541912.localdomain to 836.6M Dec 2 05:10:28 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541912.localdomain to 836.6M Dec 2 05:10:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 2 05:10:28 localhost ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541913.localdomain to 836.6M Dec 2 05:10:28 localhost ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541913.localdomain to 836.6M Dec 2 05:10:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 2 05:10:28 localhost ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:10:28 localhost ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:10:28 localhost ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:10:28 localhost ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:10:28 localhost ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:10:28 localhost ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: 
Value '877246668' is below minimum 939524096 Dec 2 05:10:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:10:28 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:10:28 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:10:28 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev 06e2cfb1-440a-45bd-89b0-fc844d89986b (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:10:28 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 06e2cfb1-440a-45bd-89b0-fc844d89986b (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:10:28 localhost ceph-mgr[287188]: [progress INFO root] Completed event 06e2cfb1-440a-45bd-89b0-fc844d89986b (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:10:28 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:10:28 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 
172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 
localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:10:28 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:28.615 262347 INFO neutron.agent.dhcp.agent [None req-ba8a10c5-e9de-4c9b-a884-2827556420a0 - - - - - -] DHCP configuration for ports {'6f85e6fe-549a-4116-a1ab-84bd28711afb'} is completed#033[00m Dec 2 05:10:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v378: 177 pgs: 177 active+clean; 675 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 23 KiB/s rd, 32 MiB/s wr, 48 op/s Dec 2 05:10:29 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:29.408 262347 INFO neutron.agent.linux.ip_lib [None req-21ab1c9e-59d0-40c0-a5fe-fa801326b9f6 - - - - - -] Device tap18876e1d-66 cannot be used as it has no MAC address#033[00m Dec 2 05:10:29 localhost nova_compute[281045]: 2025-12-02 10:10:29.436 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:29 localhost kernel: device tap18876e1d-66 entered promiscuous mode Dec 2 05:10:29 localhost NetworkManager[5967]: [1764670229.4468] manager: (tap18876e1d-66): new Generic device (/org/freedesktop/NetworkManager/Devices/37) Dec 2 05:10:29 localhost ovn_controller[153778]: 2025-12-02T10:10:29Z|00175|binding|INFO|Claiming lport 18876e1d-6618-49f4-b6fb-335739a3ce98 for this chassis. Dec 2 05:10:29 localhost ovn_controller[153778]: 2025-12-02T10:10:29Z|00176|binding|INFO|18876e1d-6618-49f4-b6fb-335739a3ce98: Claiming unknown Dec 2 05:10:29 localhost nova_compute[281045]: 2025-12-02 10:10:29.450 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:29 localhost systemd-udevd[319410]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:10:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:29.465 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, 
additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=192b8184-6f69-46ff-bf8b-6041bbb62345, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=18876e1d-6618-49f4-b6fb-335739a3ce98) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:29.467 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 18876e1d-6618-49f4-b6fb-335739a3ce98 in datapath b08b53d5-dfe9-4f37-b0e2-3da89dc155a5 bound to our chassis#033[00m Dec 2 05:10:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:29.469 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b08b53d5-dfe9-4f37-b0e2-3da89dc155a5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:10:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:29.470 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[01f678a1-b9ad-47fc-bb91-616327acf90e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:29 localhost journal[229262]: ethtool ioctl error on tap18876e1d-66: No such device Dec 2 05:10:29 localhost journal[229262]: ethtool ioctl error on tap18876e1d-66: No such device Dec 2 05:10:29 localhost ovn_controller[153778]: 2025-12-02T10:10:29Z|00177|binding|INFO|Setting lport 18876e1d-6618-49f4-b6fb-335739a3ce98 ovn-installed in OVS Dec 2 05:10:29 localhost ovn_controller[153778]: 2025-12-02T10:10:29Z|00178|binding|INFO|Setting lport 18876e1d-6618-49f4-b6fb-335739a3ce98 up in Southbound Dec 2 05:10:29 localhost nova_compute[281045]: 2025-12-02 10:10:29.487 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:29 localhost 
journal[229262]: ethtool ioctl error on tap18876e1d-66: No such device Dec 2 05:10:29 localhost journal[229262]: ethtool ioctl error on tap18876e1d-66: No such device Dec 2 05:10:29 localhost journal[229262]: ethtool ioctl error on tap18876e1d-66: No such device Dec 2 05:10:29 localhost journal[229262]: ethtool ioctl error on tap18876e1d-66: No such device Dec 2 05:10:29 localhost journal[229262]: ethtool ioctl error on tap18876e1d-66: No such device Dec 2 05:10:29 localhost journal[229262]: ethtool ioctl error on tap18876e1d-66: No such device Dec 2 05:10:29 localhost nova_compute[281045]: 2025-12-02 10:10:29.526 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:29 localhost nova_compute[281045]: 2025-12-02 10:10:29.557 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:29 localhost ceph-mon[301710]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M Dec 2 05:10:29 localhost ceph-mon[301710]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M Dec 2 05:10:29 localhost ceph-mon[301710]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M Dec 2 05:10:29 localhost ceph-mon[301710]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:10:29 localhost ceph-mon[301710]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:10:29 localhost ceph-mon[301710]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:10:29 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:10:29 localhost ovn_controller[153778]: 
2025-12-02T10:10:29Z|00179|binding|INFO|Removing iface tap18876e1d-66 ovn-installed in OVS Dec 2 05:10:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:29.911 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 624d824d-4d49-4b95-a3a4-9a03de2f337b with type ""#033[00m Dec 2 05:10:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:29.913 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=192b8184-6f69-46ff-bf8b-6041bbb62345, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=18876e1d-6618-49f4-b6fb-335739a3ce98) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:29 localhost ovn_controller[153778]: 2025-12-02T10:10:29Z|00180|binding|INFO|Removing lport 18876e1d-6618-49f4-b6fb-335739a3ce98 ovn-installed in OVS Dec 2 05:10:29 localhost nova_compute[281045]: 2025-12-02 10:10:29.915 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:29.917 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 18876e1d-6618-49f4-b6fb-335739a3ce98 in datapath b08b53d5-dfe9-4f37-b0e2-3da89dc155a5 unbound from our chassis#033[00m Dec 2 05:10:29 localhost nova_compute[281045]: 2025-12-02 10:10:29.919 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:29.919 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b08b53d5-dfe9-4f37-b0e2-3da89dc155a5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:10:29 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:29.920 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[f9d2c1b2-7625-4fc2-aa5c-94507d41b4a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:30 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "99d59f6b-cf2c-47ae-b465-7c2965afd103_827df7ca-4b70-438d-9e17-f06019bfe5e4", "force": true, "format": "json"}]: dispatch Dec 2 05:10:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:99d59f6b-cf2c-47ae-b465-7c2965afd103_827df7ca-4b70-438d-9e17-f06019bfe5e4, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:30 localhost ceph-mgr[287188]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' Dec 2 05:10:30 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta' Dec 2 05:10:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:99d59f6b-cf2c-47ae-b465-7c2965afd103_827df7ca-4b70-438d-9e17-f06019bfe5e4, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:30 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "99d59f6b-cf2c-47ae-b465-7c2965afd103", "force": true, "format": "json"}]: dispatch Dec 2 05:10:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:99d59f6b-cf2c-47ae-b465-7c2965afd103, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:30 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' Dec 2 05:10:30 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta' Dec 2 05:10:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:99d59f6b-cf2c-47ae-b465-7c2965afd103, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:30 localhost podman[319481]: Dec 2 05:10:30 localhost podman[319481]: 2025-12-02 10:10:30.49509863 +0000 UTC m=+0.076999687 container create f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 2 05:10:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:10:30 localhost systemd[1]: Started libpod-conmon-f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658.scope. Dec 2 05:10:30 localhost systemd[1]: Started libcrun container. 
Dec 2 05:10:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e445d47f7aa63bc26523d0df8868e3fcbc251684de06ab5b024e5fe07226834/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:10:30 localhost podman[319481]: 2025-12-02 10:10:30.453333686 +0000 UTC m=+0.035234773 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:10:30 localhost podman[319481]: 2025-12-02 10:10:30.573348094 +0000 UTC m=+0.155249161 container init f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:10:30 localhost dnsmasq[319512]: started, version 2.85 cachesize 150 Dec 2 05:10:30 localhost dnsmasq[319512]: DNS service limited to local subnets Dec 2 05:10:30 localhost dnsmasq[319512]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:10:30 localhost dnsmasq[319512]: warning: no upstream servers configured Dec 2 05:10:30 localhost dnsmasq-dhcp[319512]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:10:30 localhost dnsmasq[319512]: read /var/lib/neutron/dhcp/b08b53d5-dfe9-4f37-b0e2-3da89dc155a5/addn_hosts - 0 addresses Dec 2 05:10:30 localhost dnsmasq-dhcp[319512]: read /var/lib/neutron/dhcp/b08b53d5-dfe9-4f37-b0e2-3da89dc155a5/host Dec 2 05:10:30 localhost dnsmasq-dhcp[319512]: read /var/lib/neutron/dhcp/b08b53d5-dfe9-4f37-b0e2-3da89dc155a5/opts Dec 2 
05:10:30 localhost podman[319495]: 2025-12-02 10:10:30.609690241 +0000 UTC m=+0.076659566 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:10:30 localhost podman[319495]: 2025-12-02 10:10:30.617895463 +0000 UTC m=+0.084864728 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2) Dec 2 05:10:30 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:10:30 localhost podman[319481]: 2025-12-02 10:10:30.636116413 +0000 UTC m=+0.218017470 container start f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:10:30 localhost nova_compute[281045]: 2025-12-02 10:10:30.760 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:30 localhost kernel: device tap18876e1d-66 left promiscuous mode Dec 2 05:10:30 localhost nova_compute[281045]: 2025-12-02 10:10:30.783 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:30 localhost nova_compute[281045]: 2025-12-02 10:10:30.826 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:30 localhost dnsmasq[319512]: read /var/lib/neutron/dhcp/b08b53d5-dfe9-4f37-b0e2-3da89dc155a5/addn_hosts - 0 addresses Dec 2 05:10:30 localhost dnsmasq-dhcp[319512]: read /var/lib/neutron/dhcp/b08b53d5-dfe9-4f37-b0e2-3da89dc155a5/host Dec 2 05:10:30 localhost dnsmasq-dhcp[319512]: read /var/lib/neutron/dhcp/b08b53d5-dfe9-4f37-b0e2-3da89dc155a5/opts Dec 2 05:10:30 localhost podman[319538]: 2025-12-02 10:10:30.927132324 +0000 UTC m=+0.047123878 container kill f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent [None req-21ab1c9e-59d0-40c0-a5fe-fa801326b9f6 - - - - - -] Unable to reload_allocations dhcp for b08b53d5-dfe9-4f37-b0e2-3da89dc155a5.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap18876e1d-66 not found in namespace qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5. Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 
10:10:30.946 262347 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR 
neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw)
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state)
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent return fut.result()
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent return self.__get_result()
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent raise self._exception
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs)
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs,
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent raise exc_type(*result[2])
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap18876e1d-66 not found in namespace qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5.
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent #033[00m
Dec 2 05:10:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.949 262347 INFO neutron.agent.dhcp.agent [None req-50790067-e1ed-419b-a4c8-808a5373a8ca - - - - - -] Synchronizing state#033[00m
Dec 2 05:10:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v379: 177 pgs: 177 active+clean; 675 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 15 KiB/s rd, 20 MiB/s wr, 31 op/s
Dec 2 05:10:31 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:31.417 262347 INFO neutron.agent.dhcp.agent [None req-174f6df8-7825-4331-b3aa-853f12724825 - - - - - -] DHCP configuration for ports {'a3684361-b660-4235-9667-7e02c2ba4562'} is completed#033[00m
Dec 2 05:10:31 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 2 05:10:31 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 2 05:10:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 2 05:10:31 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:10:31 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 2 05:10:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 2 05:10:31 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:10:31 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:31.959 262347 INFO neutron.agent.dhcp.agent [None req-8d48d508-f59b-479a-8c33-262a1848fc23 - - - - - -] All active networks have been fetched through RPC.#033[00m
Dec 2 05:10:31 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:31.960 262347 INFO neutron.agent.dhcp.agent [-] Starting network b08b53d5-dfe9-4f37-b0e2-3da89dc155a5 dhcp configuration#033[00m
Dec 2 05:10:32 localhost dnsmasq[319512]: exiting on receipt of SIGTERM
Dec 2 05:10:32 localhost podman[319570]: 2025-12-02 10:10:32.125833856 +0000 UTC m=+0.054465924 container kill f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:10:32 localhost systemd[1]: libpod-f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658.scope: Deactivated successfully.
Dec 2 05:10:32 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 2 05:10:32 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 2 05:10:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 2 05:10:32 localhost podman[319586]: 2025-12-02 10:10:32.201092538 +0000 UTC m=+0.059017554 container died f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:10:32 localhost podman[319586]: 2025-12-02 10:10:32.303880797 +0000 UTC m=+0.161805743 container cleanup f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 2 05:10:32 localhost systemd[1]: libpod-conmon-f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658.scope: Deactivated successfully.
Dec 2 05:10:32 localhost podman[319585]: 2025-12-02 10:10:32.346200427 +0000 UTC m=+0.201542473 container remove f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:10:32 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:32.430 262347 INFO neutron.agent.dhcp.agent [None req-78366714-e941-4a99-b687-b0cee884fc81 - - - - - -] Finished network b08b53d5-dfe9-4f37-b0e2-3da89dc155a5 dhcp configuration#033[00m
Dec 2 05:10:32 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:32.431 262347 INFO neutron.agent.dhcp.agent [None req-8d48d508-f59b-479a-8c33-262a1848fc23 - - - - - -] Synchronizing state complete#033[00m
Dec 2 05:10:32 localhost systemd[1]: var-lib-containers-storage-overlay-5e445d47f7aa63bc26523d0df8868e3fcbc251684de06ab5b024e5fe07226834-merged.mount: Deactivated successfully.
Dec 2 05:10:32 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658-userdata-shm.mount: Deactivated successfully.
Dec 2 05:10:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:10:32 localhost systemd[1]: run-netns-qdhcp\x2db08b53d5\x2ddfe9\x2d4f37\x2db0e2\x2d3da89dc155a5.mount: Deactivated successfully.
Dec 2 05:10:32 localhost nova_compute[281045]: 2025-12-02 10:10:32.646 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:10:32 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:10:32 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:10:32 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:10:32 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 2 05:10:32 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:10:32 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:32.836 2 INFO neutron.agent.securitygroups_rpc [None req-b8b78442-432d-4c51-90c0-6b0763587b8d 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['1794fecb-60a8-41cc-838d-a48dc5474875']#033[00m
Dec 2 05:10:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e156 e156: 6 total, 6 up, 6 in
Dec 2 05:10:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v381: 177 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 167 active+clean; 787 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 16 KiB/s rd, 22 MiB/s wr, 36 op/s
Dec 2 05:10:33 localhost nova_compute[281045]: 2025-12-02 10:10:33.237 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:10:33 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "7a4c64a8-f75e-4eb8-8104-489d2e71f23a", "format": "json"}]: dispatch
Dec 2 05:10:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7a4c64a8-f75e-4eb8-8104-489d2e71f23a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 2 05:10:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7a4c64a8-f75e-4eb8-8104-489d2e71f23a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 2 05:10:33 localhost podman[239757]: time="2025-12-02T10:10:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 05:10:33 localhost podman[239757]: @ - - [02/Dec/2025:10:10:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158570 "" "Go-http-client/1.1"
Dec 2 05:10:33 localhost podman[239757]: @ - - [02/Dec/2025:10:10:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19684 "" "Go-http-client/1.1"
Dec 2 05:10:34 localhost nova_compute[281045]: 2025-12-02 10:10:34.607 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:10:34 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:34.609 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:10:34 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:34.610 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 2 05:10:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v382: 177 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 167 active+clean; 787 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 16 KiB/s rd, 22 MiB/s wr, 36 op/s
Dec 2 05:10:35 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 2 05:10:35 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:10:35 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:35.371 2 INFO neutron.agent.securitygroups_rpc [None req-d462b9e0-1fd6-4bb8-aa5c-65cdc1c34ce0 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['1bd96bc4-2204-473c-8b88-08bb385e4850', '1794fecb-60a8-41cc-838d-a48dc5474875']#033[00m
Dec 2 05:10:35 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 2 05:10:35 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:10:35 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 2 05:10:35 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 2 05:10:35 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:10:35 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 2 05:10:35 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:10:35 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 2 05:10:35 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 2 05:10:35 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:10:35 localhost nova_compute[281045]: 2025-12-02 10:10:35.828 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:10:36 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:36.038 2 INFO neutron.agent.securitygroups_rpc [None req-d3e51cbf-aa8e-4108-aa2d-8a899b386ca8 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['1bd96bc4-2204-473c-8b88-08bb385e4850']#033[00m
Dec 2 05:10:36 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:10:36 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 2 05:10:36 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 2 05:10:36 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 2 05:10:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:10:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:10:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:10:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:10:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:10:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:10:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v383: 177 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 167 active+clean; 787 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 16 KiB/s rd, 22 MiB/s wr, 36 op/s
Dec 2 05:10:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:10:37 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "7a4c64a8-f75e-4eb8-8104-489d2e71f23a_bcca2fdb-3954-4db7-bf24-5e38c29448c7", "force": true, "format": "json"}]: dispatch
Dec 2 05:10:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a4c64a8-f75e-4eb8-8104-489d2e71f23a_bcca2fdb-3954-4db7-bf24-5e38c29448c7, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 2 05:10:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 2 05:10:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 2 05:10:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a4c64a8-f75e-4eb8-8104-489d2e71f23a_bcca2fdb-3954-4db7-bf24-5e38c29448c7, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 2 05:10:37 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "7a4c64a8-f75e-4eb8-8104-489d2e71f23a", "force": true, "format": "json"}]: dispatch
Dec 2 05:10:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a4c64a8-f75e-4eb8-8104-489d2e71f23a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 2 05:10:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 2 05:10:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 2 05:10:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a4c64a8-f75e-4eb8-8104-489d2e71f23a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 2 05:10:38 localhost nova_compute[281045]: 2025-12-02 10:10:38.239 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:10:38 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 2 05:10:38 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 2 05:10:38 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 2 05:10:38 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 2 05:10:38 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 2 05:10:38 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 2 05:10:39 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:10:39 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 2 05:10:39 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v384: 177 pgs: 177 active+clean; 907 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 15 KiB/s rd, 23 MiB/s wr, 34 op/s
Dec 2 05:10:39 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:39.555 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:10:39Z, description=, device_id=d7b55be3-df8b-4a7a-a053-0a870505d24f, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9aec39bb-d45c-46dd-bcbf-fbaa84c48978, ip_allocation=immediate, mac_address=fa:16:3e:7a:f7:0b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2570, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:10:39Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m
Dec 2 05:10:39 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:39.562 2 INFO neutron.agent.securitygroups_rpc [None req-b887e192-fbe8-4997-9fd8-8fe0e62f2ad3 ffc28dac62f4495c9452fce17050d09a 16ae7f5f159c4b10a1539c2d9b52fce5 - - default default] Security group rule updated ['2409236f-431b-4039-840f-bb40e7858355']#033[00m
Dec 2 05:10:39 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 2 05:10:39 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:10:39 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:10:39 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 2 05:10:39 localhost podman[319633]: 2025-12-02 10:10:39.852725025 +0000 UTC m=+0.061381067 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 2 05:10:39 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses
Dec 2 05:10:39 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 2 05:10:39 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 2 05:10:40 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:40.170 262347 INFO neutron.agent.dhcp.agent [None req-4e731b1d-dc36-4987-bfcf-245a6fe2c36a - - - - - -] DHCP configuration for ports {'9aec39bb-d45c-46dd-bcbf-fbaa84c48978'} is completed#033[00m
Dec 2 05:10:40 localhost nova_compute[281045]: 2025-12-02 10:10:40.829 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:10:41 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "5a8eab8a-1e6c-4298-b827-66849539d417", "format": "json"}]: dispatch
Dec 2 05:10:41 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5a8eab8a-1e6c-4298-b827-66849539d417, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 2 05:10:41 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5a8eab8a-1e6c-4298-b827-66849539d417, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 2 05:10:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v385: 177 pgs: 177 active+clean; 907 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 15 KiB/s rd, 23 MiB/s wr, 34 op/s
Dec 2 05:10:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e157 e157: 6 total, 6 up, 6 in
Dec 2 05:10:41 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:41.612 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 2 05:10:42 localhost openstack_network_exporter[241816]: ERROR 10:10:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:10:42 localhost openstack_network_exporter[241816]: ERROR 10:10:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 05:10:42 localhost openstack_network_exporter[241816]: ERROR 10:10:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:10:42 localhost openstack_network_exporter[241816]: ERROR 10:10:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 05:10:42 localhost openstack_network_exporter[241816]:
Dec 2 05:10:42 localhost openstack_network_exporter[241816]: ERROR 10:10:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 05:10:42 localhost openstack_network_exporter[241816]:
Dec 2 05:10:42 localhost nova_compute[281045]: 2025-12-02 10:10:42.182 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:10:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.599321) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242599397, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2422, "num_deletes": 253, "total_data_size": 4172320, "memory_usage": 4353024, "flush_reason": "Manual Compaction"}
Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Dec 2 05:10:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e158 e158: 6 total, 6 up, 6 in
Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242622815, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 2731233, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22879, "largest_seqno": 25296, "table_properties": {"data_size": 2721883, "index_size": 5663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 22659, "raw_average_key_size": 21, "raw_value_size": 2701971, "raw_average_value_size": 2610, "num_data_blocks": 241, "num_entries": 1035, "num_filter_entries": 1035, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", 
"compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670111, "oldest_key_time": 1764670111, "file_creation_time": 1764670242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}} Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 23547 microseconds, and 8085 cpu microseconds. Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.622872) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 2731233 bytes OK Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.622899) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.624875) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.624899) EVENT_LOG_v1 {"time_micros": 1764670242624892, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.624924) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: 
[db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 4160873, prev total WAL file size 4160914, number of live WAL files 2. Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.625846) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. '7061786F73003132353531' seq:0, type:0; will stop at (end) Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(2667KB)], [36(14MB)] Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242625881, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 18100064, "oldest_snapshot_seqno": -1} Dec 2 05:10:42 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 2 05:10:42 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table 
#39: 12951 keys, 16965879 bytes, temperature: kUnknown Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242714649, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 16965879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16893397, "index_size": 39037, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32389, "raw_key_size": 347808, "raw_average_key_size": 26, "raw_value_size": 16674132, "raw_average_value_size": 1287, "num_data_blocks": 1470, "num_entries": 12951, "num_filter_entries": 12951, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}} Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.714925) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 16965879 bytes Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.720355) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.7 rd, 190.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 14.7 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(12.8) write-amplify(6.2) OK, records in: 13491, records dropped: 540 output_compression: NoCompression Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.720376) EVENT_LOG_v1 {"time_micros": 1764670242720367, "job": 20, "event": "compaction_finished", "compaction_time_micros": 88874, "compaction_time_cpu_micros": 31948, "output_level": 6, "num_output_files": 1, "total_output_size": 16965879, "num_input_records": 13491, "num_output_records": 12951, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242720752, "job": 20, "event": "table_file_deletion", "file_number": 38} Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242722258, "job": 20, 
"event": "table_file_deletion", "file_number": 36} Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.625809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.722284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.722288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.722290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.722292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:10:42 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.722294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:10:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 2 05:10:42 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Dec 2 05:10:42 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:43 localhost ceph-mgr[287188]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:43 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 2 05:10:43 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:43 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:10:43 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:10:43 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v388: 177 pgs: 177 active+clean; 1.0 GiB data, 3.4 GiB used, 39 GiB / 42 GiB avail; 20 KiB/s rd, 31 MiB/s wr, 44 op/s Dec 2 05:10:43 localhost nova_compute[281045]: 2025-12-02 10:10:43.270 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:43 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:43 localhost ceph-mon[301710]: from='mgr.34354 ' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:43 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:43 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 2 05:10:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v389: 177 pgs: 177 active+clean; 1.0 GiB data, 3.4 GiB used, 39 GiB / 42 GiB avail; 20 KiB/s rd, 31 MiB/s wr, 44 op/s Dec 2 05:10:45 localhost nova_compute[281045]: 2025-12-02 10:10:45.831 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "5a8eab8a-1e6c-4298-b827-66849539d417_de2c1857-d908-4600-af3d-2ff1f2d9e3dc", "force": true, "format": "json"}]: dispatch Dec 2 05:10:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a8eab8a-1e6c-4298-b827-66849539d417_de2c1857-d908-4600-af3d-2ff1f2d9e3dc, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' Dec 2 05:10:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config 
b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta' Dec 2 05:10:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a8eab8a-1e6c-4298-b827-66849539d417_de2c1857-d908-4600-af3d-2ff1f2d9e3dc, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "5a8eab8a-1e6c-4298-b827-66849539d417", "force": true, "format": "json"}]: dispatch Dec 2 05:10:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a8eab8a-1e6c-4298-b827-66849539d417, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' Dec 2 05:10:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta' Dec 2 05:10:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a8eab8a-1e6c-4298-b827-66849539d417, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", 
"auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch Dec 2 05:10:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:10:46 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 2 05:10:46 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:46 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:10:46 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:10:46 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:46 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e159 e159: 6 
total, 6 up, 6 in Dec 2 05:10:46 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:46 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:46 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:46 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:10:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:10:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. 
Dec 2 05:10:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:10:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:10:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:10:47 localhost podman[319662]: 2025-12-02 10:10:47.087593388 +0000 UTC m=+0.073653174 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Dec 2 05:10:47 localhost podman[319657]: 2025-12-02 10:10:47.144569279 +0000 UTC 
m=+0.134288367 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:10:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v391: 177 pgs: 177 active+clean; 1.0 GiB data, 3.4 GiB used, 39 GiB / 42 GiB avail; 14 KiB/s rd, 21 MiB/s wr, 31 op/s Dec 2 05:10:47 
localhost podman[319662]: 2025-12-02 10:10:47.150835882 +0000 UTC m=+0.136895668 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:10:47 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:10:47 localhost podman[319657]: 2025-12-02 10:10:47.184547268 +0000 UTC m=+0.174266436 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:10:47 localhost podman[319655]: 2025-12-02 10:10:47.191420879 +0000 UTC m=+0.185510461 container health_status 
225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:10:47 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:10:47 localhost podman[319656]: 2025-12-02 10:10:47.299314774 +0000 UTC m=+0.290406634 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:10:47 localhost podman[319656]: 2025-12-02 10:10:47.310092276 +0000 UTC m=+0.301184176 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:10:47 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:10:47 localhost podman[319655]: 2025-12-02 10:10:47.328406858 +0000 UTC m=+0.322496420 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent) Dec 2 05:10:47 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 05:10:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e160 e160: 6 total, 6 up, 6 in Dec 2 05:10:47 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:47.572 262347 INFO neutron.agent.linux.ip_lib [None req-b2c78541-a2ac-46a9-97d9-a3df1ad26c63 - - - - - -] Device tap837bee5c-e5 cannot be used as it has no MAC address#033[00m Dec 2 05:10:47 localhost nova_compute[281045]: 2025-12-02 10:10:47.593 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:47 localhost kernel: device tap837bee5c-e5 entered promiscuous mode Dec 2 05:10:47 localhost NetworkManager[5967]: [1764670247.5994] manager: (tap837bee5c-e5): new Generic device (/org/freedesktop/NetworkManager/Devices/38) Dec 2 05:10:47 localhost ovn_controller[153778]: 2025-12-02T10:10:47Z|00181|binding|INFO|Claiming lport 837bee5c-e57d-4b1b-85ce-a2e06275b067 for this chassis. Dec 2 05:10:47 localhost ovn_controller[153778]: 2025-12-02T10:10:47Z|00182|binding|INFO|837bee5c-e57d-4b1b-85ce-a2e06275b067: Claiming unknown Dec 2 05:10:47 localhost nova_compute[281045]: 2025-12-02 10:10:47.601 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:47 localhost systemd-udevd[319744]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:10:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:47.615 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-fc2b26fc-414c-4c58-85dd-be52b87d6d85', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2b26fc-414c-4c58-85dd-be52b87d6d85', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=967d9288-e578-4b56-bd46-6584d42cca7c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=837bee5c-e57d-4b1b-85ce-a2e06275b067) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:47.616 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 837bee5c-e57d-4b1b-85ce-a2e06275b067 in datapath fc2b26fc-414c-4c58-85dd-be52b87d6d85 bound to our chassis#033[00m Dec 2 05:10:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:47.618 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fc2b26fc-414c-4c58-85dd-be52b87d6d85 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:10:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:47.619 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[feca2329-f713-4d3b-afb4-acd0bed7fc5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:47 localhost journal[229262]: ethtool ioctl error on tap837bee5c-e5: No such device Dec 2 05:10:47 localhost ovn_controller[153778]: 2025-12-02T10:10:47Z|00183|binding|INFO|Setting lport 837bee5c-e57d-4b1b-85ce-a2e06275b067 ovn-installed in OVS Dec 2 05:10:47 localhost ovn_controller[153778]: 2025-12-02T10:10:47Z|00184|binding|INFO|Setting lport 837bee5c-e57d-4b1b-85ce-a2e06275b067 up in Southbound Dec 2 05:10:47 localhost journal[229262]: ethtool ioctl error on tap837bee5c-e5: No such device Dec 2 05:10:47 localhost nova_compute[281045]: 2025-12-02 10:10:47.638 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:47 localhost journal[229262]: ethtool ioctl error on tap837bee5c-e5: No such device Dec 2 05:10:47 localhost journal[229262]: ethtool ioctl error on tap837bee5c-e5: No such device Dec 2 05:10:47 localhost journal[229262]: ethtool ioctl error on tap837bee5c-e5: No such device Dec 2 05:10:47 localhost journal[229262]: ethtool ioctl error on tap837bee5c-e5: No such device Dec 2 05:10:47 localhost journal[229262]: ethtool ioctl error on tap837bee5c-e5: No such device Dec 2 05:10:47 localhost journal[229262]: ethtool ioctl error on tap837bee5c-e5: No such device Dec 2 05:10:47 localhost nova_compute[281045]: 2025-12-02 10:10:47.674 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:47 localhost nova_compute[281045]: 2025-12-02 10:10:47.703 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:48 localhost nova_compute[281045]: 2025-12-02 10:10:48.306 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:48 localhost podman[319813]: Dec 2 05:10:48 localhost ovn_controller[153778]: 2025-12-02T10:10:48Z|00185|binding|INFO|Removing iface tap837bee5c-e5 ovn-installed in OVS Dec 2 05:10:48 localhost ovn_controller[153778]: 2025-12-02T10:10:48Z|00186|binding|INFO|Removing lport 837bee5c-e57d-4b1b-85ce-a2e06275b067 ovn-installed in OVS Dec 2 05:10:48 localhost podman[319813]: 2025-12-02 10:10:48.524551581 +0000 UTC m=+0.100871780 container create 8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 2 05:10:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:48.527 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port d3773025-b15a-463e-9639-741039d170e1 with type ""#033[00m Dec 2 05:10:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:48.529 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-fc2b26fc-414c-4c58-85dd-be52b87d6d85', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2b26fc-414c-4c58-85dd-be52b87d6d85', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=967d9288-e578-4b56-bd46-6584d42cca7c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=837bee5c-e57d-4b1b-85ce-a2e06275b067) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:48 localhost nova_compute[281045]: 2025-12-02 10:10:48.529 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:48.531 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 837bee5c-e57d-4b1b-85ce-a2e06275b067 in datapath fc2b26fc-414c-4c58-85dd-be52b87d6d85 unbound from our chassis#033[00m Dec 2 05:10:48 localhost nova_compute[281045]: 2025-12-02 10:10:48.533 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:48 localhost ovn_metadata_agent[159477]: 2025-12-02 10:10:48.534 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fc2b26fc-414c-4c58-85dd-be52b87d6d85 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:10:48 localhost 
ovn_metadata_agent[159477]: 2025-12-02 10:10:48.536 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[57986b43-ca06-414e-9bab-10e6703edcf3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:48 localhost podman[319813]: 2025-12-02 10:10:48.474537025 +0000 UTC m=+0.050857254 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:10:48 localhost systemd[1]: Started libpod-conmon-8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397.scope. Dec 2 05:10:48 localhost systemd[1]: tmp-crun.Qse3Iz.mount: Deactivated successfully. Dec 2 05:10:48 localhost systemd[1]: Started libcrun container. Dec 2 05:10:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dfcbfb63df6d9c5020125c7ab6c6cc4e5a9e3aa6c7fc6ec83f933c15cc5b3a1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:10:48 localhost podman[319813]: 2025-12-02 10:10:48.629698143 +0000 UTC m=+0.206018352 container init 8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 2 05:10:48 localhost podman[319813]: 2025-12-02 10:10:48.641541536 +0000 UTC m=+0.217861735 container start 8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:10:48 localhost dnsmasq[319832]: started, version 2.85 cachesize 150 Dec 2 05:10:48 localhost dnsmasq[319832]: DNS service limited to local subnets Dec 2 05:10:48 localhost dnsmasq[319832]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:10:48 localhost dnsmasq[319832]: warning: no upstream servers configured Dec 2 05:10:48 localhost dnsmasq-dhcp[319832]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:10:48 localhost dnsmasq[319832]: read /var/lib/neutron/dhcp/fc2b26fc-414c-4c58-85dd-be52b87d6d85/addn_hosts - 0 addresses Dec 2 05:10:48 localhost dnsmasq-dhcp[319832]: read /var/lib/neutron/dhcp/fc2b26fc-414c-4c58-85dd-be52b87d6d85/host Dec 2 05:10:48 localhost dnsmasq-dhcp[319832]: read /var/lib/neutron/dhcp/fc2b26fc-414c-4c58-85dd-be52b87d6d85/opts Dec 2 05:10:48 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e161 e161: 6 total, 6 up, 6 in Dec 2 05:10:48 localhost nova_compute[281045]: 2025-12-02 10:10:48.732 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:48 localhost kernel: device tap837bee5c-e5 left promiscuous mode Dec 2 05:10:48 localhost nova_compute[281045]: 2025-12-02 10:10:48.753 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:48 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:48.812 262347 INFO neutron.agent.dhcp.agent [None req-34e4fb36-6d31-41b2-9679-c59b21b814e4 - - - - - -] DHCP configuration for ports 
{'69f43bd0-3572-495c-9546-a66a25fa3e0c'} is completed#033[00m Dec 2 05:10:48 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:48.917 2 INFO neutron.agent.securitygroups_rpc [None req-c5737a18-0087-499e-be42-7eb006bdc7a0 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['d54da663-bbdd-4967-b64b-8a9f95f589dd']#033[00m Dec 2 05:10:49 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v394: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 16 KiB/s rd, 23 MiB/s wr, 33 op/s Dec 2 05:10:49 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "0971658b-39fb-4b1f-bbaf-63a2efed16bf", "format": "json"}]: dispatch Dec 2 05:10:49 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0971658b-39fb-4b1f-bbaf-63a2efed16bf, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:49 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0971658b-39fb-4b1f-bbaf-63a2efed16bf, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:49 localhost dnsmasq[319832]: read /var/lib/neutron/dhcp/fc2b26fc-414c-4c58-85dd-be52b87d6d85/addn_hosts - 0 addresses Dec 2 05:10:49 localhost dnsmasq-dhcp[319832]: read /var/lib/neutron/dhcp/fc2b26fc-414c-4c58-85dd-be52b87d6d85/host Dec 2 05:10:49 localhost podman[319852]: 2025-12-02 10:10:49.660560018 +0000 UTC m=+0.069312111 container kill 8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:10:49 localhost dnsmasq-dhcp[319832]: read /var/lib/neutron/dhcp/fc2b26fc-414c-4c58-85dd-be52b87d6d85/opts Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent [-] Unable to reload_allocations dhcp for fc2b26fc-414c-4c58-85dd-be52b87d6d85.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap837bee5c-e5 not found in namespace qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85. Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Dec 2 
05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 
2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent return fut.result() Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent return self.__get_result() Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent raise self._exception Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File 
"/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap837bee5c-e5 not found in namespace qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85. 
Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent #033[00m Dec 2 05:10:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.693 262347 INFO neutron.agent.dhcp.agent [None req-8d48d508-f59b-479a-8c33-262a1848fc23 - - - - - -] Synchronizing state#033[00m Dec 2 05:10:50 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 2 05:10:50 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:50 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 2 05:10:50 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:50 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Dec 2 05:10:50 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:50 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:50 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 2 05:10:50 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:50 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:10:50 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:10:50 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:50 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:50.413 262347 INFO neutron.agent.dhcp.agent [None req-513b0151-6d8e-4d93-9c6d-01ea4bbae47f - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 2 05:10:50 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:50.415 262347 INFO neutron.agent.dhcp.agent [-] Starting network fc2b26fc-414c-4c58-85dd-be52b87d6d85 dhcp configuration#033[00m Dec 2 05:10:50 localhost dnsmasq[319832]: exiting on receipt of SIGTERM Dec 2 05:10:50 localhost podman[319885]: 2025-12-02 10:10:50.580607368 +0000 UTC m=+0.055060094 container kill 8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:10:50 localhost systemd[1]: tmp-crun.mb3Njp.mount: Deactivated successfully. Dec 2 05:10:50 localhost systemd[1]: libpod-8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397.scope: Deactivated successfully. Dec 2 05:10:50 localhost podman[319900]: 2025-12-02 10:10:50.647426331 +0000 UTC m=+0.043769656 container died 8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:10:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:10:50 localhost podman[319900]: 2025-12-02 10:10:50.693420354 +0000 UTC m=+0.089763599 container remove 8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:10:50 localhost systemd[1]: libpod-conmon-8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397.scope: Deactivated successfully. Dec 2 05:10:50 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e162 e162: 6 total, 6 up, 6 in Dec 2 05:10:50 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:50 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:50 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:50 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 2 05:10:50 localhost nova_compute[281045]: 2025-12-02 10:10:50.833 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:51 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:51.092 262347 INFO neutron.agent.dhcp.agent [None 
req-620cde1a-61d1-40fb-a73c-b9690a19ebc6 - - - - - -] Finished network fc2b26fc-414c-4c58-85dd-be52b87d6d85 dhcp configuration#033[00m Dec 2 05:10:51 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:10:51.094 262347 INFO neutron.agent.dhcp.agent [None req-513b0151-6d8e-4d93-9c6d-01ea4bbae47f - - - - - -] Synchronizing state complete#033[00m Dec 2 05:10:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v396: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 21 KiB/s rd, 30 MiB/s wr, 44 op/s Dec 2 05:10:51 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e163 e163: 6 total, 6 up, 6 in Dec 2 05:10:51 localhost systemd[1]: var-lib-containers-storage-overlay-8dfcbfb63df6d9c5020125c7ab6c6cc4e5a9e3aa6c7fc6ec83f933c15cc5b3a1-merged.mount: Deactivated successfully. Dec 2 05:10:51 localhost systemd[1]: run-netns-qdhcp\x2dfc2b26fc\x2d414c\x2d4c58\x2d85dd\x2dbe52b87d6d85.mount: Deactivated successfully. Dec 2 05:10:51 localhost nova_compute[281045]: 2025-12-02 10:10:51.617 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:51 localhost nova_compute[281045]: 2025-12-02 10:10:51.952 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:10:52 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1465263749' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:10:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:10:52 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1465263749' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:10:53 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v398: 177 pgs: 177 active+clean; 148 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 82 KiB/s rd, 24 MiB/s wr, 152 op/s Dec 2 05:10:53 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:10:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:10:53 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 2 05:10:53 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:10:53 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:10:53 localhost nova_compute[281045]: 2025-12-02 10:10:53.311 
281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:53 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:10:53 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:10:53 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:53.457 2 INFO neutron.agent.securitygroups_rpc [None req-f5ab8c31-1c32-4d96-896a-5e2487d3e658 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['d54da663-bbdd-4967-b64b-8a9f95f589dd', '475d5c6b-fba4-44ef-b012-03f922f307d8', 'c4cadb1e-8d38-4a3c-b1f8-f6d93fbe5968']#033[00m Dec 2 05:10:53 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:10:53 localhost ceph-mon[301710]: 
from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:53 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:53 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:10:53 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:53.846 2 INFO neutron.agent.securitygroups_rpc [None req-054f4ed0-bf56-489e-9f2e-7d08aad333fe 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['475d5c6b-fba4-44ef-b012-03f922f307d8', 'c4cadb1e-8d38-4a3c-b1f8-f6d93fbe5968']#033[00m Dec 2 05:10:53 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "0971658b-39fb-4b1f-bbaf-63a2efed16bf_856cf5e4-324a-4059-b9e0-23839724525d", "force": true, "format": "json"}]: 
dispatch Dec 2 05:10:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0971658b-39fb-4b1f-bbaf-63a2efed16bf_856cf5e4-324a-4059-b9e0-23839724525d, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:53 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' Dec 2 05:10:53 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta' Dec 2 05:10:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0971658b-39fb-4b1f-bbaf-63a2efed16bf_856cf5e4-324a-4059-b9e0-23839724525d, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:53 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "0971658b-39fb-4b1f-bbaf-63a2efed16bf", "force": true, "format": "json"}]: dispatch Dec 2 05:10:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0971658b-39fb-4b1f-bbaf-63a2efed16bf, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:53 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' Dec 2 05:10:53 localhost ceph-mgr[287188]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta' Dec 2 05:10:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:10:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0971658b-39fb-4b1f-bbaf-63a2efed16bf, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:10:54 localhost podman[319926]: 2025-12-02 10:10:54.065173736 +0000 UTC m=+0.065490714 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6) Dec 2 05:10:54 localhost systemd[1]: tmp-crun.dALXYj.mount: Deactivated successfully. 
Dec 2 05:10:54 localhost podman[319925]: 2025-12-02 10:10:54.140052567 +0000 UTC m=+0.140249540 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:10:54 localhost podman[319925]: 2025-12-02 10:10:54.146374931 +0000 UTC m=+0.146571884 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:10:54 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:10:54 localhost podman[319926]: 2025-12-02 10:10:54.202679201 +0000 UTC m=+0.202996209 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 05:10:54 localhost 
systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 05:10:55 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v399: 177 pgs: 177 active+clean; 148 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 56 KiB/s rd, 26 KiB/s wr, 99 op/s Dec 2 05:10:55 localhost nova_compute[281045]: 2025-12-02 10:10:55.834 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:56 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e164 e164: 6 total, 6 up, 6 in Dec 2 05:10:56 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch Dec 2 05:10:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 2 05:10:57 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:10:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Dec 2 05:10:57 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:10:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:57 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch Dec 2 05:10:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:57 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:10:57 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:10:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:10:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v401: 177 pgs: 177 active+clean; 148 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 56 KiB/s rd, 26 KiB/s wr, 100 op/s Dec 2 05:10:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:57 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:10:57 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' 
cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:10:57 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:10:57 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 2 05:10:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e165 e165: 6 total, 6 up, 6 in Dec 2 05:10:58 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "299defe1-3a6d-4652-876c-cda1688f998a", "format": "json"}]: dispatch Dec 2 05:10:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:299defe1-3a6d-4652-876c-cda1688f998a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:299defe1-3a6d-4652-876c-cda1688f998a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:10:58 localhost nova_compute[281045]: 2025-12-02 10:10:58.313 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:58 localhost nova_compute[281045]: 2025-12-02 10:10:58.540 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:58 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:10:58 localhost podman[319986]: 2025-12-02 10:10:58.598421936 
+0000 UTC m=+0.060830951 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:10:58 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:10:58 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:10:58 localhost neutron_sriov_agent[255428]: 2025-12-02 10:10:58.927 2 INFO neutron.agent.securitygroups_rpc [None req-84a55dcc-5035-483d-9948-fd4c09f198da 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v403: 177 pgs: 177 active+clean; 148 MiB data, 910 MiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 61 KiB/s wr, 136 op/s Dec 2 05:11:00 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:11:00 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1955612142' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:11:00 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:11:00 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1955612142' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:11:00 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch Dec 2 05:11:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:11:00 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 2 05:11:00 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:00 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:11:00 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:11:00 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", 
"caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:11:00 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:00 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:00 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:00 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data 
namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:11:00 localhost nova_compute[281045]: 2025-12-02 10:11:00.836 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:11:00 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:00.944 262347 INFO neutron.agent.linux.ip_lib [None req-20472392-4cdf-45dc-aaab-c8dc86c333ff - - - - - -] Device tap0caba9ad-8a cannot be used as it has no MAC address#033[00m Dec 2 05:11:00 localhost podman[320009]: 2025-12-02 10:11:00.964418714 +0000 UTC m=+0.089908724 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:11:00 localhost nova_compute[281045]: 2025-12-02 10:11:00.974 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:00 localhost kernel: device tap0caba9ad-8a entered promiscuous mode Dec 2 05:11:00 localhost NetworkManager[5967]: [1764670260.9837] manager: (tap0caba9ad-8a): new Generic device (/org/freedesktop/NetworkManager/Devices/39) Dec 2 05:11:00 localhost podman[320009]: 2025-12-02 10:11:00.979912609 +0000 UTC m=+0.105402589 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:11:00 localhost nova_compute[281045]: 2025-12-02 10:11:00.987 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:00 localhost ovn_controller[153778]: 2025-12-02T10:11:00Z|00187|binding|INFO|Claiming lport 0caba9ad-8ad7-4727-b26a-176b808427fe for this chassis. Dec 2 05:11:00 localhost ovn_controller[153778]: 2025-12-02T10:11:00Z|00188|binding|INFO|0caba9ad-8ad7-4727-b26a-176b808427fe: Claiming unknown Dec 2 05:11:00 localhost systemd-udevd[320034]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:11:01 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:00.997 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-60fad5f6-701c-4b07-9d8c-e83f0c029e7b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60fad5f6-701c-4b07-9d8c-e83f0c029e7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6624a0ad-df63-46d6-9ae9-1be40b885bed, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0caba9ad-8ad7-4727-b26a-176b808427fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:11:01 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:00.998 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 0caba9ad-8ad7-4727-b26a-176b808427fe in datapath 60fad5f6-701c-4b07-9d8c-e83f0c029e7b bound to our chassis#033[00m Dec 2 05:11:01 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:01.000 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 60fad5f6-701c-4b07-9d8c-e83f0c029e7b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:11:01 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:01.001 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[131281b5-67e0-4cb1-ba8b-5185f51d7a00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:11:01 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 05:11:01 localhost journal[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device Dec 2 05:11:01 localhost journal[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device Dec 2 05:11:01 localhost ovn_controller[153778]: 2025-12-02T10:11:01Z|00189|binding|INFO|Setting lport 0caba9ad-8ad7-4727-b26a-176b808427fe ovn-installed in OVS Dec 2 05:11:01 localhost ovn_controller[153778]: 2025-12-02T10:11:01Z|00190|binding|INFO|Setting lport 0caba9ad-8ad7-4727-b26a-176b808427fe up in Southbound Dec 2 05:11:01 localhost nova_compute[281045]: 2025-12-02 10:11:01.054 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:01 localhost journal[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device Dec 2 05:11:01 localhost journal[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device Dec 2 05:11:01 localhost journal[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device Dec 2 05:11:01 localhost journal[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device Dec 2 05:11:01 localhost journal[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device Dec 2 05:11:01 localhost journal[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device Dec 2 05:11:01 localhost nova_compute[281045]: 2025-12-02 10:11:01.091 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:01 
localhost nova_compute[281045]: 2025-12-02 10:11:01.123 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v404: 177 pgs: 177 active+clean; 148 MiB data, 910 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 38 KiB/s wr, 49 op/s Dec 2 05:11:01 localhost podman[320106]: Dec 2 05:11:01 localhost podman[320106]: 2025-12-02 10:11:01.866615535 +0000 UTC m=+0.064192913 container create 4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-60fad5f6-701c-4b07-9d8c-e83f0c029e7b, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:11:01 localhost systemd[1]: Started libpod-conmon-4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417.scope. Dec 2 05:11:01 localhost systemd[1]: Started libcrun container. 
Dec 2 05:11:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c69ee66c8360ff7a699f7280effe9e3eacf5d9394951387a335cc5b4c018dbf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:11:01 localhost podman[320106]: 2025-12-02 10:11:01.830251428 +0000 UTC m=+0.027828876 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:11:01 localhost podman[320106]: 2025-12-02 10:11:01.938670148 +0000 UTC m=+0.136247536 container init 4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-60fad5f6-701c-4b07-9d8c-e83f0c029e7b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:11:01 localhost podman[320106]: 2025-12-02 10:11:01.947217221 +0000 UTC m=+0.144794589 container start 4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-60fad5f6-701c-4b07-9d8c-e83f0c029e7b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:11:01 localhost dnsmasq[320124]: started, version 2.85 cachesize 150 Dec 2 05:11:01 localhost dnsmasq[320124]: DNS service limited to local subnets Dec 2 05:11:01 localhost dnsmasq[320124]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:11:01 localhost dnsmasq[320124]: warning: no upstream servers configured Dec 2 05:11:01 localhost dnsmasq-dhcp[320124]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:11:01 localhost dnsmasq[320124]: read /var/lib/neutron/dhcp/60fad5f6-701c-4b07-9d8c-e83f0c029e7b/addn_hosts - 0 addresses Dec 2 05:11:01 localhost dnsmasq-dhcp[320124]: read /var/lib/neutron/dhcp/60fad5f6-701c-4b07-9d8c-e83f0c029e7b/host Dec 2 05:11:01 localhost dnsmasq-dhcp[320124]: read /var/lib/neutron/dhcp/60fad5f6-701c-4b07-9d8c-e83f0c029e7b/opts Dec 2 05:11:02 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. Dec 2 05:11:02 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:02.078 262347 INFO neutron.agent.dhcp.agent [None req-95d3c2ac-d093-43c2-86df-2eae6c5b26ce - - - - - -] DHCP configuration for ports {'abf8504f-a4cf-4596-941f-89fffed30317'} is completed#033[00m Dec 2 05:11:02 localhost podman[320142]: 2025-12-02 10:11:02.377532404 +0000 UTC m=+0.064684919 container kill 4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-60fad5f6-701c-4b07-9d8c-e83f0c029e7b, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:11:02 localhost dnsmasq[320124]: read /var/lib/neutron/dhcp/60fad5f6-701c-4b07-9d8c-e83f0c029e7b/addn_hosts - 0 addresses Dec 2 05:11:02 localhost dnsmasq-dhcp[320124]: read /var/lib/neutron/dhcp/60fad5f6-701c-4b07-9d8c-e83f0c029e7b/host Dec 2 05:11:02 
localhost dnsmasq-dhcp[320124]: read /var/lib/neutron/dhcp/60fad5f6-701c-4b07-9d8c-e83f0c029e7b/opts Dec 2 05:11:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:02 localhost nova_compute[281045]: 2025-12-02 10:11:02.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:11:02 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:02.709 262347 INFO neutron.agent.dhcp.agent [None req-1051ce87-9e8e-4de9-ba1b-e4321b43d48b - - - - - -] DHCP configuration for ports {'abf8504f-a4cf-4596-941f-89fffed30317', '0caba9ad-8ad7-4727-b26a-176b808427fe'} is completed#033[00m Dec 2 05:11:02 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "299defe1-3a6d-4652-876c-cda1688f998a_1df72d22-e70e-4f6b-877d-3a1cb60db11a", "force": true, "format": "json"}]: dispatch Dec 2 05:11:02 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:299defe1-3a6d-4652-876c-cda1688f998a_1df72d22-e70e-4f6b-877d-3a1cb60db11a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:11:02 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' Dec 2 05:11:02 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config 
b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta' Dec 2 05:11:02 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:299defe1-3a6d-4652-876c-cda1688f998a_1df72d22-e70e-4f6b-877d-3a1cb60db11a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:11:02 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "299defe1-3a6d-4652-876c-cda1688f998a", "force": true, "format": "json"}]: dispatch Dec 2 05:11:02 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:299defe1-3a6d-4652-876c-cda1688f998a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:11:02 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' Dec 2 05:11:02 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta' Dec 2 05:11:02 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:299defe1-3a6d-4652-876c-cda1688f998a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:11:02 localhost ovn_controller[153778]: 2025-12-02T10:11:02Z|00191|binding|INFO|Removing iface tap0caba9ad-8a ovn-installed in OVS Dec 2 05:11:02 localhost ovn_controller[153778]: 2025-12-02T10:11:02Z|00192|binding|INFO|Removing 
lport 0caba9ad-8ad7-4727-b26a-176b808427fe ovn-installed in OVS Dec 2 05:11:02 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:02.874 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port a5949880-e32c-4c65-b409-11f4935b996c with type ""#033[00m Dec 2 05:11:02 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:02.876 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-60fad5f6-701c-4b07-9d8c-e83f0c029e7b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60fad5f6-701c-4b07-9d8c-e83f0c029e7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6624a0ad-df63-46d6-9ae9-1be40b885bed, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0caba9ad-8ad7-4727-b26a-176b808427fe) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:11:02 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:02.878 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 0caba9ad-8ad7-4727-b26a-176b808427fe in datapath 60fad5f6-701c-4b07-9d8c-e83f0c029e7b unbound from our chassis#033[00m Dec 2 
05:11:02 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:02.882 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60fad5f6-701c-4b07-9d8c-e83f0c029e7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:11:02 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:02.883 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[a32f6fa1-d18a-4628-a3c7-b7eb71042bc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:11:02 localhost nova_compute[281045]: 2025-12-02 10:11:02.910 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:02 localhost nova_compute[281045]: 2025-12-02 10:11:02.911 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:02 localhost dnsmasq[320124]: exiting on receipt of SIGTERM Dec 2 05:11:02 localhost systemd[1]: libpod-4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417.scope: Deactivated successfully. 
Dec 2 05:11:02 localhost podman[320182]: 2025-12-02 10:11:02.991386945 +0000 UTC m=+0.050803782 container kill 4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-60fad5f6-701c-4b07-9d8c-e83f0c029e7b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:11:03 localhost podman[320201]: 2025-12-02 10:11:03.058288991 +0000 UTC m=+0.047969155 container died 4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-60fad5f6-701c-4b07-9d8c-e83f0c029e7b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 05:11:03 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417-userdata-shm.mount: Deactivated successfully. Dec 2 05:11:03 localhost systemd[1]: var-lib-containers-storage-overlay-7c69ee66c8360ff7a699f7280effe9e3eacf5d9394951387a335cc5b4c018dbf-merged.mount: Deactivated successfully. 
Dec 2 05:11:03 localhost podman[320201]: 2025-12-02 10:11:03.106473571 +0000 UTC m=+0.096153695 container remove 4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-60fad5f6-701c-4b07-9d8c-e83f0c029e7b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:11:03 localhost systemd[1]: libpod-conmon-4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417.scope: Deactivated successfully. Dec 2 05:11:03 localhost nova_compute[281045]: 2025-12-02 10:11:03.118 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:03 localhost kernel: device tap0caba9ad-8a left promiscuous mode Dec 2 05:11:03 localhost nova_compute[281045]: 2025-12-02 10:11:03.130 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:03.144 262347 INFO neutron.agent.dhcp.agent [None req-513b0151-6d8e-4d93-9c6d-01ea4bbae47f - - - - - -] Synchronizing state#033[00m Dec 2 05:11:03 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 active+clean; 149 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 78 KiB/s wr, 98 op/s Dec 2 05:11:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:03.181 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:11:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:03.181 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:11:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:03.182 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:11:03 localhost nova_compute[281045]: 2025-12-02 10:11:03.316 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:03 localhost podman[239757]: time="2025-12-02T10:11:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:11:03 localhost podman[239757]: @ - - [02/Dec/2025:10:11:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158570 "" "Go-http-client/1.1" Dec 2 05:11:03 localhost podman[239757]: @ - - [02/Dec/2025:10:11:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19696 "" "Go-http-client/1.1" Dec 2 05:11:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:03.793 262347 INFO neutron.agent.dhcp.agent [None req-dff763d9-1c11-4931-a80d-aa0322eb1ff2 - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 2 05:11:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:03.793 262347 INFO neutron.agent.dhcp.agent [-] Starting network 60fad5f6-701c-4b07-9d8c-e83f0c029e7b dhcp configuration#033[00m Dec 2 05:11:03 localhost 
neutron_dhcp_agent[262343]: 2025-12-02 10:11:03.870 262347 INFO neutron.agent.dhcp.agent [None req-e7309244-7ad6-4ba5-ba3f-5c3f65a5a309 - - - - - -] Finished network 60fad5f6-701c-4b07-9d8c-e83f0c029e7b dhcp configuration#033[00m Dec 2 05:11:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:03.871 262347 INFO neutron.agent.dhcp.agent [None req-dff763d9-1c11-4931-a80d-aa0322eb1ff2 - - - - - -] Synchronizing state complete#033[00m Dec 2 05:11:03 localhost systemd[1]: run-netns-qdhcp\x2d60fad5f6\x2d701c\x2d4b07\x2d9d8c\x2de83f0c029e7b.mount: Deactivated successfully. Dec 2 05:11:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:03.955 262347 INFO neutron.agent.dhcp.agent [None req-c09d1cdd-d903-4cee-ba8c-50839f4dfc3c - - - - - -] DHCP configuration for ports {'abf8504f-a4cf-4596-941f-89fffed30317'} is completed#033[00m Dec 2 05:11:04 localhost nova_compute[281045]: 2025-12-02 10:11:04.026 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:04 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch Dec 2 05:11:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 2 05:11:04 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 
05:11:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 2 05:11:04 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 2 05:11:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:11:04 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 2 05:11:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:11:04 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 2 05:11:04 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 2 05:11:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:11:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 2 05:11:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/780445321' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 2 05:11:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 2 05:11:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2736016088' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 2 05:11:04 localhost nova_compute[281045]: 2025-12-02 10:11:04.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:11:04 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 2 05:11:04 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 2 05:11:04 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 2 05:11:04 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 2 05:11:05 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v406: 177 pgs: 177 active+clean; 149 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 73 KiB/s wr, 90 op/s
Dec 2 05:11:05 localhost nova_compute[281045]: 2025-12-02 10:11:05.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:11:05 localhost nova_compute[281045]: 2025-12-02 10:11:05.838 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:06 localhost nova_compute[281045]: 2025-12-02 10:11:06.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:11:06 localhost nova_compute[281045]: 2025-12-02 10:11:06.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:11:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e166 e166: 6 total, 6 up, 6 in
Dec 2 05:11:06 localhost nova_compute[281045]: 2025-12-02 10:11:06.624 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:11:06 localhost nova_compute[281045]: 2025-12-02 10:11:06.625 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:11:06 localhost nova_compute[281045]: 2025-12-02 10:11:06.625 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:11:06 localhost nova_compute[281045]: 2025-12-02 10:11:06.625 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 2 05:11:06 localhost nova_compute[281045]: 2025-12-02 10:11:06.626 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 05:11:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:11:06
Dec 2 05:11:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 2 05:11:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 2 05:11:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['backups', 'vms', 'manila_metadata', '.mgr', 'images', 'manila_data', 'volumes']
Dec 2 05:11:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 2 05:11:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:11:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:11:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:11:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )]
Dec 2 05:11:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 2 05:11:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:11:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:11:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 2 05:11:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 2 05:11:07 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3661005679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 2 05:11:07 localhost nova_compute[281045]: 2025-12-02 10:11:07.074 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 05:11:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v408: 177 pgs: 177 active+clean; 149 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 65 KiB/s wr, 81 op/s
Dec 2 05:11:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 2 05:11:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 2 05:11:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 2 05:11:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:11:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 2 05:11:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:11:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 2 05:11:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:11:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.443522589800856e-05 quantized to 32 (current 32)
Dec 2 05:11:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:11:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 2 05:11:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:11:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 2 05:11:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:11:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.9084135957565606e-06 of space, bias 1.0, pg target 0.00037977430555555556 quantized to 32 (current 32)
Dec 2 05:11:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:11:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00022164860762144054 of space, bias 4.0, pg target 0.17643229166666666 quantized to 16 (current 16)
Dec 2 05:11:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 2 05:11:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 2 05:11:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 2 05:11:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 2 05:11:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 2 05:11:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 2 05:11:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 2 05:11:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 2 05:11:07 localhost nova_compute[281045]: 2025-12-02 10:11:07.283 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 2 05:11:07 localhost nova_compute[281045]: 2025-12-02 10:11:07.284 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11506MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 2 05:11:07 localhost nova_compute[281045]: 2025-12-02 10:11:07.285 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:11:07 localhost nova_compute[281045]: 2025-12-02 10:11:07.285 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:11:07 localhost nova_compute[281045]: 2025-12-02 10:11:07.499 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 2 05:11:07 localhost nova_compute[281045]: 2025-12-02 10:11:07.500 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 2 05:11:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:11:07 localhost nova_compute[281045]: 2025-12-02 10:11:07.536 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 05:11:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e167 e167: 6 total, 6 up, 6 in
Dec 2 05:11:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 2 05:11:07 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1405474493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 2 05:11:08 localhost nova_compute[281045]: 2025-12-02 10:11:08.010 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 05:11:08 localhost nova_compute[281045]: 2025-12-02 10:11:08.018 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 2 05:11:08 localhost nova_compute[281045]: 2025-12-02 10:11:08.046 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 2 05:11:08 localhost nova_compute[281045]: 2025-12-02 10:11:08.049 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 2 05:11:08 localhost nova_compute[281045]: 2025-12-02 10:11:08.050 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:11:08 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 2 05:11:08 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 2 05:11:08 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 2 05:11:08 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:11:08 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 2 05:11:08 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 2 05:11:08 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:11:08 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 2 05:11:08 localhost nova_compute[281045]: 2025-12-02 10:11:08.318 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:08 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:11:08 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:11:08 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:11:08 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 2 05:11:09 localhost nova_compute[281045]: 2025-12-02 10:11:09.052 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:11:09 localhost nova_compute[281045]: 2025-12-02 10:11:09.053 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 2 05:11:09 localhost nova_compute[281045]: 2025-12-02 10:11:09.053 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 2 05:11:09 localhost nova_compute[281045]: 2025-12-02 10:11:09.112 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 2 05:11:09 localhost nova_compute[281045]: 2025-12-02 10:11:09.113 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:11:09 localhost nova_compute[281045]: 2025-12-02 10:11:09.113 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:11:09 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 177 active+clean; 149 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 59 KiB/s wr, 75 op/s
Dec 2 05:11:09 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "aacf1e5d-1b53-42f1-b3a7-45f0acb43c13_a93da211-cd1e-4fb4-ab83-001318ab16bf", "force": true, "format": "json"}]: dispatch
Dec 2 05:11:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aacf1e5d-1b53-42f1-b3a7-45f0acb43c13_a93da211-cd1e-4fb4-ab83-001318ab16bf, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 2 05:11:09 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 2 05:11:09 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 2 05:11:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aacf1e5d-1b53-42f1-b3a7-45f0acb43c13_a93da211-cd1e-4fb4-ab83-001318ab16bf, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 2 05:11:09 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "aacf1e5d-1b53-42f1-b3a7-45f0acb43c13", "force": true, "format": "json"}]: dispatch
Dec 2 05:11:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aacf1e5d-1b53-42f1-b3a7-45f0acb43c13, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 2 05:11:09 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 2 05:11:09 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 2 05:11:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aacf1e5d-1b53-42f1-b3a7-45f0acb43c13, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 2 05:11:09 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e168 e168: 6 total, 6 up, 6 in
Dec 2 05:11:09 localhost dnsmasq[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/addn_hosts - 0 addresses
Dec 2 05:11:09 localhost podman[320282]: 2025-12-02 10:11:09.97341617 +0000 UTC m=+0.062779201 container kill bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:11:09 localhost dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/host
Dec 2 05:11:09 localhost dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/opts
Dec 2 05:11:10 localhost nova_compute[281045]: 2025-12-02 10:11:10.253 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:10 localhost kernel: device tap8e7a6388-06 left promiscuous mode
Dec 2 05:11:10 localhost ovn_controller[153778]: 2025-12-02T10:11:10Z|00193|binding|INFO|Releasing lport 8e7a6388-0616-4036-bc8b-c45817966af9 from this chassis (sb_readonly=0)
Dec 2 05:11:10 localhost ovn_controller[153778]: 2025-12-02T10:11:10Z|00194|binding|INFO|Setting lport 8e7a6388-0616-4036-bc8b-c45817966af9 down in Southbound
Dec 2 05:11:10 localhost nova_compute[281045]: 2025-12-02 10:11:10.271 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:10 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:10.283 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-fc2e8456-8064-45d4-b986-3bd5157209ba', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2e8456-8064-45d4-b986-3bd5157209ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f7326c3837b4427191aafcff504110ac', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb1a8e80-528c-4bda-8d5b-06a577344504, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8e7a6388-0616-4036-bc8b-c45817966af9) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:11:10 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:10.285 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 8e7a6388-0616-4036-bc8b-c45817966af9 in datapath fc2e8456-8064-45d4-b986-3bd5157209ba unbound from our chassis#033[00m
Dec 2 05:11:10 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:10.288 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc2e8456-8064-45d4-b986-3bd5157209ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:11:10 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:10.289 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[2571ab22-7a3f-49e3-9445-7b6e95edb2ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:11:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e169 e169: 6 total, 6 up, 6 in
Dec 2 05:11:10 localhost nova_compute[281045]: 2025-12-02 10:11:10.839 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v413: 177 pgs: 177 active+clean; 149 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 32 KiB/s wr, 46 op/s
Dec 2 05:11:11 localhost nova_compute[281045]: 2025-12-02 10:11:11.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:11:11 localhost nova_compute[281045]: 2025-12-02 10:11:11.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 2 05:11:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 2 05:11:11 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2736016088' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 2 05:11:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 2 05:11:11 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2736016088' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 2 05:11:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e170 e170: 6 total, 6 up, 6 in
Dec 2 05:11:11 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 2 05:11:11 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:11:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 2 05:11:11 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:11:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 2 05:11:11 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 2 05:11:11 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:11:11 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 2 05:11:11 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:11:11 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 2 05:11:11 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 2 05:11:11 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:11:12 localhost openstack_network_exporter[241816]: ERROR 10:11:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 05:11:12 localhost openstack_network_exporter[241816]: ERROR 10:11:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:11:12 localhost openstack_network_exporter[241816]: ERROR 10:11:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:11:12 localhost openstack_network_exporter[241816]: ERROR 10:11:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 05:11:12 localhost openstack_network_exporter[241816]:
Dec 2 05:11:12 localhost openstack_network_exporter[241816]: ERROR 10:11:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 05:11:12 localhost openstack_network_exporter[241816]:
Dec 2 05:11:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:11:12 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:11:12 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 2 05:11:12 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 2 05:11:12 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 2 05:11:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e171 e171: 6 total, 6 up, 6 in
Dec 2 05:11:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v416: 177 pgs: 177 active+clean; 149 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 108 KiB/s wr, 99 op/s
Dec 2 05:11:13 localhost nova_compute[281045]: 2025-12-02 10:11:13.378 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:13 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "format": "json"}]: dispatch
Dec 2 05:11:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:45fda55f-a67b-4a03-8e83-17717dd47f28, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 2 05:11:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:45fda55f-a67b-4a03-8e83-17717dd47f28, format:json, prefix:fs clone status,
vol_name:cephfs) < "" Dec 2 05:11:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.445+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '45fda55f-a67b-4a03-8e83-17717dd47f28' of type subvolume Dec 2 05:11:13 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '45fda55f-a67b-4a03-8e83-17717dd47f28' of type subvolume Dec 2 05:11:13 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "force": true, "format": "json"}]: dispatch Dec 2 05:11:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:11:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28'' moved to trashcan Dec 2 05:11:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:11:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < "" Dec 2 05:11:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 
2 05:11:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.471+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.471+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.471+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.471+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.471+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.514+0000 7fd380d75640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.514+0000 7fd380d75640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.514+0000 7fd380d75640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.514+0000 7fd380d75640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.514+0000 7fd380d75640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:11:13 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:13.555 262347 INFO neutron.agent.linux.ip_lib [None req-29612d0d-bf4e-4b65-8c8e-1bb291f2f9ee - - - - - -] Device tap24f6853f-6a cannot be used as it has no MAC address#033[00m Dec 2 05:11:13 localhost nova_compute[281045]: 2025-12-02 10:11:13.574 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:13 localhost kernel: device tap24f6853f-6a entered promiscuous mode Dec 2 05:11:13 localhost NetworkManager[5967]: [1764670273.5803] manager: (tap24f6853f-6a): new Generic device (/org/freedesktop/NetworkManager/Devices/40) Dec 2 05:11:13 localhost ovn_controller[153778]: 2025-12-02T10:11:13Z|00195|binding|INFO|Claiming lport 24f6853f-6ab0-449f-846c-b775d5c1b118 for this chassis. 
Dec 2 05:11:13 localhost ovn_controller[153778]: 2025-12-02T10:11:13Z|00196|binding|INFO|24f6853f-6ab0-449f-846c-b775d5c1b118: Claiming unknown Dec 2 05:11:13 localhost nova_compute[281045]: 2025-12-02 10:11:13.582 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:13 localhost systemd-udevd[320339]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:11:13 localhost ovn_controller[153778]: 2025-12-02T10:11:13Z|00197|binding|INFO|Setting lport 24f6853f-6ab0-449f-846c-b775d5c1b118 ovn-installed in OVS Dec 2 05:11:13 localhost nova_compute[281045]: 2025-12-02 10:11:13.615 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:13 localhost nova_compute[281045]: 2025-12-02 10:11:13.641 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:13 localhost nova_compute[281045]: 2025-12-02 10:11:13.667 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:13 localhost ovn_controller[153778]: 2025-12-02T10:11:13Z|00198|binding|INFO|Setting lport 24f6853f-6ab0-449f-846c-b775d5c1b118 up in Southbound Dec 2 05:11:13 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:13.778 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 
'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-e639c436-316b-48d5-b04e-92acf5f6e4d6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e639c436-316b-48d5-b04e-92acf5f6e4d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b39d1a8c-5e56-4b96-bd5c-0c0c80df17e2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=24f6853f-6ab0-449f-846c-b775d5c1b118) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:11:13 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:13.780 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 24f6853f-6ab0-449f-846c-b775d5c1b118 in datapath e639c436-316b-48d5-b04e-92acf5f6e4d6 bound to our chassis#033[00m Dec 2 05:11:13 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:13.782 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e639c436-316b-48d5-b04e-92acf5f6e4d6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:11:13 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:13.783 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5b121a-6024-4f08-b39d-f84d2ab0b327]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:11:14 localhost podman[320394]: Dec 2 05:11:14 localhost podman[320394]: 2025-12-02 10:11:14.485381367 +0000 UTC m=+0.091495783 container create 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 2 05:11:14 localhost systemd[1]: Started libpod-conmon-4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3.scope. Dec 2 05:11:14 localhost podman[320394]: 2025-12-02 10:11:14.4409092 +0000 UTC m=+0.047023666 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:11:14 localhost systemd[1]: Started libcrun container. Dec 2 05:11:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dec703e64cf49252cf57f900c5246aed36f5c491b36caec2fbe59693cfdfe47/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:11:14 localhost podman[320394]: 2025-12-02 10:11:14.56294027 +0000 UTC m=+0.169054696 container init 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:11:14 localhost podman[320394]: 2025-12-02 10:11:14.570078189 +0000 UTC m=+0.176192605 container start 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:11:14 localhost dnsmasq[320412]: started, version 2.85 cachesize 150 Dec 2 05:11:14 localhost dnsmasq[320412]: DNS service limited to local subnets Dec 2 05:11:14 localhost dnsmasq[320412]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:11:14 localhost dnsmasq[320412]: warning: no upstream servers configured Dec 2 05:11:14 localhost dnsmasq-dhcp[320412]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d Dec 2 05:11:14 localhost dnsmasq[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/addn_hosts - 0 addresses Dec 2 05:11:14 localhost dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/host Dec 2 05:11:14 localhost dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/opts Dec 2 05:11:14 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e172 e172: 6 total, 6 up, 6 in Dec 2 05:11:15 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch Dec 2 05:11:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, 
auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:11:15 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 2 05:11:15 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:11:15 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:11:15 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:11:15 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v418: 177 pgs: 177 active+clean; 149 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 95 KiB/s wr, 87 op/s Dec 2 05:11:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, 
auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:11:15 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:11:15 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:15 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:15 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:11:15 localhost nova_compute[281045]: 2025-12-02 10:11:15.842 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:16 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:16.350 262347 INFO 
neutron.agent.dhcp.agent [None req-e0477b0a-5e14-4156-8406-cbd688063842 - - - - - -] DHCP configuration for ports {'d8028153-5f2c-4429-a73a-6e644730b15a'} is completed#033[00m Dec 2 05:11:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e173 e173: 6 total, 6 up, 6 in Dec 2 05:11:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v420: 177 pgs: 177 active+clean; 149 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 78 KiB/s wr, 72 op/s Dec 2 05:11:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:17 localhost podman[320430]: 2025-12-02 10:11:17.776826812 +0000 UTC m=+0.053337210 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 2 05:11:17 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:11:17 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:11:17 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:11:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:11:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. 
Dec 2 05:11:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:11:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:11:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e174 e174: 6 total, 6 up, 6 in Dec 2 05:11:17 localhost systemd[1]: tmp-crun.uf3WSZ.mount: Deactivated successfully. Dec 2 05:11:17 localhost podman[320452]: 2025-12-02 10:11:17.937607542 +0000 UTC m=+0.123080042 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible) Dec 2 05:11:17 localhost podman[320445]: 
2025-12-02 10:11:17.94370468 +0000 UTC m=+0.141077956 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible) Dec 2 05:11:17 localhost podman[320447]: 2025-12-02 10:11:17.902548915 +0000 UTC m=+0.095424873 container health_status 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Dec 2 05:11:17 localhost podman[320452]: 2025-12-02 10:11:17.991908731 +0000 UTC m=+0.177381241 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller) Dec 2 05:11:17 localhost podman[320446]: 2025-12-02 10:11:17.998988678 +0000 UTC m=+0.191807314 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:11:18 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 05:11:18 localhost podman[320445]: 2025-12-02 10:11:18.023638606 +0000 UTC m=+0.221011842 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:11:18 localhost nova_compute[281045]: 2025-12-02 10:11:18.027 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:18 localhost podman[320447]: 2025-12-02 10:11:18.032019503 +0000 UTC m=+0.224895511 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm) Dec 2 05:11:18 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 05:11:18 localhost podman[320446]: 2025-12-02 10:11:18.034996124 +0000 UTC m=+0.227814700 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:11:18 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:11:18 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 05:11:18 localhost nova_compute[281045]: 2025-12-02 10:11:18.381 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v422: 177 pgs: 177 active+clean; 149 MiB data, 925 MiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 40 KiB/s wr, 88 op/s Dec 2 05:11:19 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 2 05:11:19 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:20 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 2 05:11:20 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:11:20 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Dec 2 05:11:20 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:11:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:20 
localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 2 05:11:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:20 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:11:20 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:11:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:20 localhost dnsmasq[319009]: exiting on receipt of SIGTERM Dec 2 05:11:20 localhost podman[320550]: 2025-12-02 10:11:20.481659212 +0000 UTC m=+0.056649002 container kill bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:11:20 localhost systemd[1]: libpod-bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348.scope: Deactivated successfully. 
Dec 2 05:11:20 localhost podman[320565]: 2025-12-02 10:11:20.54442165 +0000 UTC m=+0.043335072 container died bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:11:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348-userdata-shm.mount: Deactivated successfully. Dec 2 05:11:20 localhost systemd[1]: var-lib-containers-storage-overlay-2158c3d2b4f5070bc0f9feccb69eab6266c44d72576ad850b561465969ca2e04-merged.mount: Deactivated successfully. Dec 2 05:11:20 localhost podman[320565]: 2025-12-02 10:11:20.587283707 +0000 UTC m=+0.086197089 container remove bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:11:20 localhost systemd[1]: libpod-conmon-bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348.scope: Deactivated successfully. 
Dec 2 05:11:20 localhost nova_compute[281045]: 2025-12-02 10:11:20.846 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:20 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:11:20 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:11:20 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:11:20 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 2 05:11:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v423: 177 pgs: 177 active+clean; 149 MiB data, 925 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 38 KiB/s wr, 83 op/s Dec 2 05:11:21 localhost systemd[1]: run-netns-qdhcp\x2dfc2e8456\x2d8064\x2d45d4\x2db986\x2d3bd5157209ba.mount: Deactivated successfully. 
Dec 2 05:11:21 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:21.471 262347 INFO neutron.agent.dhcp.agent [None req-4dda3967-d52f-460b-8e51-36900425d582 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:21 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:21.497 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e175 e175: 6 total, 6 up, 6 in Dec 2 05:11:21 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:21.910 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:11:21Z, description=, device_id=be2bd9ee-1025-4bde-b6f9-05c48824f4be, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1eb79f60-066d-4ce1-95d5-094eb8f1c4ab, ip_allocation=immediate, mac_address=fa:16:3e:26:39:5c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:11:05Z, description=, dns_domain=, id=e639c436-316b-48d5-b04e-92acf5f6e4d6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1026036337, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53507, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2636, status=ACTIVE, subnets=['0b6177ed-0d94-40e6-82ff-9b5fca1eea57'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:11:10Z, vlan_transparent=None, network_id=e639c436-316b-48d5-b04e-92acf5f6e4d6, port_security_enabled=False, 
project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2669, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:11:21Z on network e639c436-316b-48d5-b04e-92acf5f6e4d6#033[00m Dec 2 05:11:22 localhost systemd[1]: tmp-crun.IuWPVj.mount: Deactivated successfully. Dec 2 05:11:22 localhost podman[320609]: 2025-12-02 10:11:22.109934014 +0000 UTC m=+0.066875286 container kill 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:11:22 localhost dnsmasq[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/addn_hosts - 1 addresses Dec 2 05:11:22 localhost dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/host Dec 2 05:11:22 localhost dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/opts Dec 2 05:11:22 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:22.232 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:22 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:22.623 262347 INFO neutron.agent.dhcp.agent [None req-0d607488-2f2d-4687-b909-1781d2309cba - - - - - -] DHCP 
configuration for ports {'1eb79f60-066d-4ce1-95d5-094eb8f1c4ab'} is completed#033[00m Dec 2 05:11:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v425: 177 pgs: 177 active+clean; 149 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 72 KiB/s rd, 66 KiB/s wr, 104 op/s Dec 2 05:11:23 localhost nova_compute[281045]: 2025-12-02 10:11:23.419 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:11:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:11:25 localhost podman[320630]: 2025-12-02 10:11:25.065938202 +0000 UTC m=+0.062417799 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., release=1755695350, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9) Dec 2 05:11:25 localhost podman[320630]: 2025-12-02 10:11:25.0788818 +0000 UTC m=+0.075361367 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 05:11:25 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 05:11:25 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:11:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:11:25 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 2 05:11:25 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:11:25 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 
2 05:11:25 localhost podman[320629]: 2025-12-02 10:11:25.12446048 +0000 UTC m=+0.123734482 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:11:25 localhost podman[320629]: 2025-12-02 10:11:25.133989463 +0000 UTC m=+0.133263455 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:11:25 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:11:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v426: 177 pgs: 177 active+clean; 149 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 54 KiB/s wr, 86 op/s Dec 2 05:11:25 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:11:25 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:11:25 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:11:25 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", 
"mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:25 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:25 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:11:25 localhost nova_compute[281045]: 2025-12-02 10:11:25.891 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:26 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e176 e176: 6 total, 6 up, 6 in Dec 2 05:11:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v428: 177 pgs: 177 active+clean; 149 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 24 KiB/s wr, 19 op/s Dec 2 05:11:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:28 localhost nova_compute[281045]: 2025-12-02 10:11:28.494 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v429: 177 pgs: 177 active+clean; 149 MiB data, 926 MiB used, 41 GiB / 42 GiB 
avail; 13 KiB/s rd, 41 KiB/s wr, 22 op/s Dec 2 05:11:29 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 05:11:29 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 05:11:29 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 05:11:29 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 05:11:29 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 05:11:29 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 05:11:29 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:29.719 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:11:21Z, description=, device_id=be2bd9ee-1025-4bde-b6f9-05c48824f4be, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1eb79f60-066d-4ce1-95d5-094eb8f1c4ab, ip_allocation=immediate, mac_address=fa:16:3e:26:39:5c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:11:05Z, description=, dns_domain=, id=e639c436-316b-48d5-b04e-92acf5f6e4d6, ipv4_address_scope=None, ipv6_address_scope=None, 
l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1026036337, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53507, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2636, status=ACTIVE, subnets=['0b6177ed-0d94-40e6-82ff-9b5fca1eea57'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:11:10Z, vlan_transparent=None, network_id=e639c436-316b-48d5-b04e-92acf5f6e4d6, port_security_enabled=False, project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2669, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:11:21Z on network e639c436-316b-48d5-b04e-92acf5f6e4d6#033[00m Dec 2 05:11:29 localhost podman[320783]: 2025-12-02 10:11:29.89912494 +0000 UTC m=+0.059906442 container kill 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:11:29 localhost dnsmasq[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/addn_hosts - 1 addresses Dec 2 05:11:29 localhost dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/host Dec 2 05:11:29 localhost dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/opts Dec 2 05:11:30 localhost ceph-mon[301710]: 
mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:11:30 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:11:30 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:11:30 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:11:30 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:11:30 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev d636f5ca-c3f4-4085-8356-84232b9d2592 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:11:30 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev d636f5ca-c3f4-4085-8356-84232b9d2592 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:11:30 localhost ceph-mgr[287188]: [progress INFO root] Completed event d636f5ca-c3f4-4085-8356-84232b9d2592 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:11:30 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:11:30 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:11:30 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:11:30 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:11:30 localhost ceph-mon[301710]: from='mgr.34354 ' 
entity='mgr.np0005541914.lljzmk' Dec 2 05:11:30 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:11:30 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:11:30 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:11:30 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:11:30 localhost nova_compute[281045]: 2025-12-02 10:11:30.924 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v430: 177 pgs: 177 active+clean; 149 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 34 KiB/s wr, 18 op/s Dec 2 05:11:31 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:31.202 262347 INFO neutron.agent.dhcp.agent [None req-10fde4b1-331a-4454-8c30-12f6a466fbaf - - - - - -] DHCP configuration for ports {'1eb79f60-066d-4ce1-95d5-094eb8f1c4ab'} is completed#033[00m Dec 2 05:11:31 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 2 05:11:31 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 2 05:11:31 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 
172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:11:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Dec 2 05:11:31 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:11:31 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:31 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 2 05:11:31 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:31 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:11:31 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:11:31 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:31 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 
05:11:31 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:11:31 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:11:31 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:11:31 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 2 05:11:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:11:32 localhost systemd[1]: tmp-crun.OeTcHO.mount: Deactivated successfully. Dec 2 05:11:32 localhost podman[320855]: 2025-12-02 10:11:32.091754162 +0000 UTC m=+0.094408432 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd) Dec 2 05:11:32 localhost podman[320855]: 2025-12-02 10:11:32.128876563 +0000 UTC m=+0.131530833 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:11:32 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 05:11:32 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:11:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:11:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:32 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:11:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v431: 177 pgs: 177 active+clean; 150 MiB data, 911 MiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 32 KiB/s wr, 4 op/s Dec 2 05:11:33 localhost nova_compute[281045]: 2025-12-02 10:11:33.525 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:33 localhost podman[239757]: time="2025-12-02T10:11:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:11:33 localhost podman[239757]: @ - - [02/Dec/2025:10:11:33 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158565 "" "Go-http-client/1.1" Dec 2 05:11:33 localhost podman[239757]: @ - - [02/Dec/2025:10:11:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19675 "" "Go-http-client/1.1" Dec 2 05:11:34 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch Dec 2 05:11:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:11:34 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 2 05:11:34 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:11:34 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:11:34 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": 
"json"} v 0) Dec 2 05:11:34 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:11:34 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:11:34 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:34 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:34 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth 
get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:11:34 localhost nova_compute[281045]: 2025-12-02 10:11:34.873 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:34 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:34.876 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:11:34 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:34.877 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:11:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v432: 177 pgs: 177 active+clean; 150 MiB data, 911 MiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 32 KiB/s wr, 4 op/s Dec 2 05:11:35 localhost systemd[1]: tmp-crun.97FvBz.mount: Deactivated successfully. 
Dec 2 05:11:35 localhost dnsmasq[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/addn_hosts - 0 addresses Dec 2 05:11:35 localhost dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/host Dec 2 05:11:35 localhost dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/opts Dec 2 05:11:35 localhost podman[320890]: 2025-12-02 10:11:35.490625278 +0000 UTC m=+0.058014063 container kill 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 05:11:35 localhost kernel: device tap24f6853f-6a left promiscuous mode Dec 2 05:11:35 localhost nova_compute[281045]: 2025-12-02 10:11:35.660 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:35 localhost ovn_controller[153778]: 2025-12-02T10:11:35Z|00199|binding|INFO|Releasing lport 24f6853f-6ab0-449f-846c-b775d5c1b118 from this chassis (sb_readonly=0) Dec 2 05:11:35 localhost ovn_controller[153778]: 2025-12-02T10:11:35Z|00200|binding|INFO|Setting lport 24f6853f-6ab0-449f-846c-b775d5c1b118 down in Southbound Dec 2 05:11:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:35.678 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], 
virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-e639c436-316b-48d5-b04e-92acf5f6e4d6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e639c436-316b-48d5-b04e-92acf5f6e4d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b39d1a8c-5e56-4b96-bd5c-0c0c80df17e2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=24f6853f-6ab0-449f-846c-b775d5c1b118) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:11:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:35.680 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 24f6853f-6ab0-449f-846c-b775d5c1b118 in datapath e639c436-316b-48d5-b04e-92acf5f6e4d6 unbound from our chassis#033[00m Dec 2 05:11:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:35.681 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e639c436-316b-48d5-b04e-92acf5f6e4d6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:11:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:35.682 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[41ef2759-381c-40d3-a161-b4fc0f095de7]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:11:35 localhost nova_compute[281045]: 2025-12-02 10:11:35.685 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:35 localhost nova_compute[281045]: 2025-12-02 10:11:35.686 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:35 localhost nova_compute[281045]: 2025-12-02 10:11:35.965 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:36 localhost systemd[1]: tmp-crun.cj2gmj.mount: Deactivated successfully. Dec 2 05:11:36 localhost podman[320932]: 2025-12-02 10:11:36.879097212 +0000 UTC m=+0.064194664 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:11:36 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses Dec 2 05:11:36 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:11:36 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:11:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:11:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:11:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:11:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:11:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:11:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:11:37 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 2 05:11:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:37 localhost nova_compute[281045]: 2025-12-02 10:11:37.121 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 2 05:11:37 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:11:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Dec 2 05:11:37 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice 
bob"} : dispatch Dec 2 05:11:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v433: 177 pgs: 177 active+clean; 150 MiB data, 911 MiB used, 41 GiB / 42 GiB avail; 96 B/s rd, 30 KiB/s wr, 4 op/s Dec 2 05:11:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:37 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 2 05:11:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:11:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:11:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:37 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:11:37 
localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:11:37 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:11:37 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 2 05:11:38 localhost nova_compute[281045]: 2025-12-02 10:11:38.529 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:38 localhost dnsmasq[320412]: exiting on receipt of SIGTERM Dec 2 05:11:38 localhost podman[320969]: 2025-12-02 10:11:38.958472324 +0000 UTC m=+0.044890091 container kill 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 2 05:11:38 localhost systemd[1]: libpod-4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3.scope: Deactivated successfully. 
Dec 2 05:11:39 localhost podman[320983]: 2025-12-02 10:11:39.014440173 +0000 UTC m=+0.041812816 container died 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:11:39 localhost systemd[1]: tmp-crun.k6JWDO.mount: Deactivated successfully. Dec 2 05:11:39 localhost podman[320983]: 2025-12-02 10:11:39.044859588 +0000 UTC m=+0.072232211 container cleanup 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:11:39 localhost systemd[1]: libpod-conmon-4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3.scope: Deactivated successfully. 
Dec 2 05:11:39 localhost podman[320984]: 2025-12-02 10:11:39.077943995 +0000 UTC m=+0.103911864 container remove 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:11:39 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v434: 177 pgs: 177 active+clean; 150 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 48 KiB/s wr, 30 op/s Dec 2 05:11:39 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:39.880 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:11:39 localhost systemd[1]: var-lib-containers-storage-overlay-3dec703e64cf49252cf57f900c5246aed36f5c491b36caec2fbe59693cfdfe47-merged.mount: Deactivated successfully. Dec 2 05:11:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:11:40 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:11:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:11:40 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 2 05:11:40 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:40 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:11:40 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:11:40 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", 
"allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:40 localhost systemd[1]: run-netns-qdhcp\x2de639c436\x2d316b\x2d48d5\x2db04e\x2d92acf5f6e4d6.mount: Deactivated successfully. Dec 2 05:11:40 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:40.625 262347 INFO neutron.agent.dhcp.agent [None req-41b18abb-e5a9-428d-88ca-344e6205bb08 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:40 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:40 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:40 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:40 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": 
"json"}]': finished Dec 2 05:11:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:11:40 localhost nova_compute[281045]: 2025-12-02 10:11:40.967 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:41 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:41.054 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v435: 177 pgs: 177 active+clean; 150 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 36 KiB/s wr, 27 op/s Dec 2 05:11:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e177 e177: 6 total, 6 up, 6 in Dec 2 05:11:42 localhost openstack_network_exporter[241816]: ERROR 10:11:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:11:42 localhost openstack_network_exporter[241816]: ERROR 10:11:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:11:42 localhost openstack_network_exporter[241816]: ERROR 10:11:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:11:42 localhost openstack_network_exporter[241816]: ERROR 10:11:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:11:42 localhost openstack_network_exporter[241816]: Dec 2 05:11:42 localhost openstack_network_exporter[241816]: ERROR 10:11:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:11:42 localhost openstack_network_exporter[241816]: Dec 2 
05:11:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v437: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 89 op/s Dec 2 05:11:43 localhost nova_compute[281045]: 2025-12-02 10:11:43.565 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:44 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch Dec 2 05:11:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:44 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 2 05:11:44 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:44 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Dec 2 05:11:44 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:11:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:44 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch Dec 2 05:11:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:44 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:11:44 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:11:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v438: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 89 op/s Dec 2 05:11:45 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:45 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:11:45 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": 
"client.alice"} : dispatch Dec 2 05:11:45 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 2 05:11:45 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e178 e178: 6 total, 6 up, 6 in Dec 2 05:11:45 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:11:45 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1526106310' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:11:45 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:11:45 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1526106310' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:11:45 localhost nova_compute[281045]: 2025-12-02 10:11:45.970 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:46 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:46.051 262347 INFO neutron.agent.linux.ip_lib [None req-60aba708-07de-48eb-a1c4-fafbeb26bacf - - - - - -] Device tap60565337-ba cannot be used as it has no MAC address#033[00m Dec 2 05:11:46 localhost nova_compute[281045]: 2025-12-02 10:11:46.075 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:46 localhost kernel: device tap60565337-ba entered promiscuous mode Dec 2 05:11:46 localhost NetworkManager[5967]: [1764670306.0808] manager: (tap60565337-ba): new Generic device (/org/freedesktop/NetworkManager/Devices/41) Dec 2 05:11:46 localhost ovn_controller[153778]: 
2025-12-02T10:11:46Z|00201|binding|INFO|Claiming lport 60565337-ba9f-460c-b321-9bed6bae4c6b for this chassis. Dec 2 05:11:46 localhost ovn_controller[153778]: 2025-12-02T10:11:46Z|00202|binding|INFO|60565337-ba9f-460c-b321-9bed6bae4c6b: Claiming unknown Dec 2 05:11:46 localhost nova_compute[281045]: 2025-12-02 10:11:46.084 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:46 localhost systemd-udevd[321024]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:11:46 localhost journal[229262]: ethtool ioctl error on tap60565337-ba: No such device Dec 2 05:11:46 localhost journal[229262]: ethtool ioctl error on tap60565337-ba: No such device Dec 2 05:11:46 localhost ovn_controller[153778]: 2025-12-02T10:11:46Z|00203|binding|INFO|Setting lport 60565337-ba9f-460c-b321-9bed6bae4c6b ovn-installed in OVS Dec 2 05:11:46 localhost nova_compute[281045]: 2025-12-02 10:11:46.119 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:46 localhost journal[229262]: ethtool ioctl error on tap60565337-ba: No such device Dec 2 05:11:46 localhost journal[229262]: ethtool ioctl error on tap60565337-ba: No such device Dec 2 05:11:46 localhost journal[229262]: ethtool ioctl error on tap60565337-ba: No such device Dec 2 05:11:46 localhost journal[229262]: ethtool ioctl error on tap60565337-ba: No such device Dec 2 05:11:46 localhost journal[229262]: ethtool ioctl error on tap60565337-ba: No such device Dec 2 05:11:46 localhost journal[229262]: ethtool ioctl error on tap60565337-ba: No such device Dec 2 05:11:46 localhost nova_compute[281045]: 2025-12-02 10:11:46.152 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:46 localhost nova_compute[281045]: 2025-12-02 10:11:46.175 
281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:46 localhost ovn_controller[153778]: 2025-12-02T10:11:46Z|00204|binding|INFO|Setting lport 60565337-ba9f-460c-b321-9bed6bae4c6b up in Southbound Dec 2 05:11:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:46.573 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-d7575463-fed8-42a9-b848-634ac68ed078', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7575463-fed8-42a9-b848-634ac68ed078', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '043cc6f66b444d00959c7dcdb078fbe8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3d9cd90-16cc-47f0-86ae-247f0a618c23, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=60565337-ba9f-460c-b321-9bed6bae4c6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:11:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:46.576 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 60565337-ba9f-460c-b321-9bed6bae4c6b in datapath d7575463-fed8-42a9-b848-634ac68ed078 bound to our chassis#033[00m Dec 2 05:11:46 localhost 
ovn_metadata_agent[159477]: 2025-12-02 10:11:46.579 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port 3a9ca436-78c1-4e2f-8261-557907b0f38d IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:11:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:46.580 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7575463-fed8-42a9-b848-634ac68ed078, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:11:46 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:46.581 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e77a9404-4f75-4802-89f0-8c0888b9d5c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:11:46 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e179 e179: 6 total, 6 up, 6 in Dec 2 05:11:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v441: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 3.6 MiB/s wr, 97 op/s Dec 2 05:11:47 localhost podman[321095]: Dec 2 05:11:47 localhost podman[321095]: 2025-12-02 10:11:47.333670156 +0000 UTC m=+0.094944868 container create 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:11:47 localhost systemd[1]: Started 
libpod-conmon-31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f.scope. Dec 2 05:11:47 localhost podman[321095]: 2025-12-02 10:11:47.287336012 +0000 UTC m=+0.048610674 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:11:47 localhost systemd[1]: Started libcrun container. Dec 2 05:11:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a238a32a00c88b302c7631d7402cd3a543c022c2ba3aed4b7050aab5db379c8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:11:47 localhost podman[321095]: 2025-12-02 10:11:47.406434452 +0000 UTC m=+0.167709114 container init 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:11:47 localhost podman[321095]: 2025-12-02 10:11:47.415557192 +0000 UTC m=+0.176831854 container start 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:11:47 localhost dnsmasq[321113]: started, version 2.85 cachesize 150 Dec 2 05:11:47 localhost 
dnsmasq[321113]: DNS service limited to local subnets Dec 2 05:11:47 localhost dnsmasq[321113]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:11:47 localhost dnsmasq[321113]: warning: no upstream servers configured Dec 2 05:11:47 localhost dnsmasq-dhcp[321113]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:11:47 localhost dnsmasq[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/addn_hosts - 0 addresses Dec 2 05:11:47 localhost dnsmasq-dhcp[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/host Dec 2 05:11:47 localhost dnsmasq-dhcp[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/opts Dec 2 05:11:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:48 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:48.144 262347 INFO neutron.agent.dhcp.agent [None req-8e89a2b8-a075-49a3-b365-0b6cd35efe3e - - - - - -] DHCP configuration for ports {'764c6210-436e-4e65-8738-de5d89857e38'} is completed#033[00m Dec 2 05:11:48 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch Dec 2 05:11:48 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:11:48 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command 
mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 2 05:11:48 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:48 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:11:48 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:11:48 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:48 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:11:48 localhost nova_compute[281045]: 2025-12-02 10:11:48.593 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:48 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:48 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:48 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:48 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:11:48 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:48.832 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:11:48Z, description=, device_id=a96fc995-3987-4eed-90c5-508db1a52dc8, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e37b6248-e731-4bac-b771-73f326b6c55b, ip_allocation=immediate, 
mac_address=fa:16:3e:5c:a4:da, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2722, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:11:48Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:11:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:11:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:11:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:11:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. 
Dec 2 05:11:49 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v442: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 88 KiB/s rd, 2.9 MiB/s wr, 131 op/s Dec 2 05:11:49 localhost podman[321123]: 2025-12-02 10:11:49.125648578 +0000 UTC m=+0.115721987 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:11:49 localhost podman[321117]: 2025-12-02 10:11:49.098549105 +0000 UTC m=+0.094436533 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, 
health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 2 05:11:49 localhost podman[321123]: 2025-12-02 10:11:49.206079599 +0000 UTC m=+0.196153018 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:11:49 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:11:49 localhost podman[321205]: 2025-12-02 10:11:49.239726082 +0000 UTC m=+0.055929469 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:11:49 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:11:49 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:11:49 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:11:49 localhost podman[321116]: 2025-12-02 10:11:49.155501815 +0000 UTC m=+0.154750526 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:11:49 localhost podman[321116]: 2025-12-02 10:11:49.285596783 +0000 UTC m=+0.284845494 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:11:49 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 05:11:49 localhost podman[321117]: 2025-12-02 10:11:49.33595675 +0000 UTC m=+0.331844178 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:11:49 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:11:49 localhost podman[321115]: 2025-12-02 10:11:49.344124101 +0000 UTC m=+0.346687184 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:11:49 localhost podman[321115]: 2025-12-02 10:11:49.425932934 +0000 UTC 
m=+0.428495957 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:11:49 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 05:11:49 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e180 e180: 6 total, 6 up, 6 in Dec 2 05:11:49 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:49.639 262347 INFO neutron.agent.dhcp.agent [None req-246bc6ef-4f58-49a3-bc56-0df20ecec196 - - - - - -] DHCP configuration for ports {'e37b6248-e731-4bac-b771-73f326b6c55b'} is completed#033[00m Dec 2 05:11:50 localhost neutron_sriov_agent[255428]: 2025-12-02 10:11:50.333 2 INFO neutron.agent.securitygroups_rpc [None req-c7a943f3-fab0-4b36-9210-2f6cba57e1de defcf0debbf84a5c9ec6342ae3d02928 8eea084241c14c5d9a6cc0d912041a21 - - default default] Security group member updated ['712bb249-1109-4289-a9cf-1e3d3f6e301e']#033[00m Dec 2 05:11:50 localhost nova_compute[281045]: 2025-12-02 10:11:50.973 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v444: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 24 KiB/s wr, 67 op/s Dec 2 05:11:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:52 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch Dec 2 05:11:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 2 
05:11:52 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Dec 2 05:11:52 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:11:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:52 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch Dec 2 05:11:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:52 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:11:52 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:11:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:11:53 localhost ceph-mgr[287188]: 
log_channel(cluster) log [DBG] : pgmap v445: 177 pgs: 177 active+clean; 197 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 89 KiB/s rd, 55 KiB/s wr, 129 op/s Dec 2 05:11:53 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:11:53 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/283666617' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:11:53 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:53 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:11:53 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:11:53 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 2 05:11:53 localhost nova_compute[281045]: 2025-12-02 10:11:53.620 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:54 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e181 e181: 6 total, 6 up, 6 in Dec 2 05:11:55 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v447: 177 pgs: 177 active+clean; 197 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 82 KiB/s rd, 51 KiB/s wr, 119 op/s Dec 2 05:11:56 localhost nova_compute[281045]: 2025-12-02 10:11:56.017 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:11:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:11:56 localhost podman[321233]: 2025-12-02 10:11:56.108319152 +0000 UTC m=+0.073216491 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:11:56 localhost podman[321234]: 2025-12-02 10:11:56.174216657 +0000 UTC m=+0.135535306 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 2 05:11:56 localhost podman[321234]: 2025-12-02 10:11:56.185789803 +0000 UTC m=+0.147108452 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck 
openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, release=1755695350, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal) Dec 2 05:11:56 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:11:56 localhost podman[321233]: 2025-12-02 10:11:56.241096591 +0000 UTC m=+0.205993940 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:11:56 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:11:57 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:11:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:11:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v448: 177 pgs: 177 active+clean; 197 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 33 KiB/s wr, 68 op/s Dec 2 05:11:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 2 05:11:57 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:11:57 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:11:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:11:57 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:11:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:57 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:11:57 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:57 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:57 localhost ceph-mon[301710]: from='mgr.34354 
' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:11:58 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:58.417 262347 INFO neutron.agent.linux.ip_lib [None req-8d0c34da-6416-4236-856d-a92b7d7df30e - - - - - -] Device tap5a59ed58-8e cannot be used as it has no MAC address#033[00m Dec 2 05:11:58 localhost nova_compute[281045]: 2025-12-02 10:11:58.475 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:58 localhost kernel: device tap5a59ed58-8e entered promiscuous mode Dec 2 05:11:58 localhost ovn_controller[153778]: 2025-12-02T10:11:58Z|00205|binding|INFO|Claiming lport 5a59ed58-8e8e-4218-a027-de857358efdd for this chassis. Dec 2 05:11:58 localhost ovn_controller[153778]: 2025-12-02T10:11:58Z|00206|binding|INFO|5a59ed58-8e8e-4218-a027-de857358efdd: Claiming unknown Dec 2 05:11:58 localhost NetworkManager[5967]: [1764670318.4848] manager: (tap5a59ed58-8e): new Generic device (/org/freedesktop/NetworkManager/Devices/42) Dec 2 05:11:58 localhost nova_compute[281045]: 2025-12-02 10:11:58.486 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:58 localhost systemd-udevd[321285]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:11:58 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:58.495 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '043cc6f66b444d00959c7dcdb078fbe8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=569352f8-5776-4fd7-bf95-e3a12d36086c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5a59ed58-8e8e-4218-a027-de857358efdd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:11:58 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:58.498 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 5a59ed58-8e8e-4218-a027-de857358efdd in datapath dcbd2fde-cd87-4087-93b9-a7b43b07dcbf bound to our chassis#033[00m
Dec 2 05:11:58 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:58.500 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dcbd2fde-cd87-4087-93b9-a7b43b07dcbf or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:11:58 localhost ovn_metadata_agent[159477]: 2025-12-02 10:11:58.502 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[ef320c61-0674-483c-a33a-731a808c274f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:11:58 localhost journal[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 2 05:11:58 localhost nova_compute[281045]: 2025-12-02 10:11:58.520 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:58 localhost journal[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 2 05:11:58 localhost ovn_controller[153778]: 2025-12-02T10:11:58Z|00207|binding|INFO|Setting lport 5a59ed58-8e8e-4218-a027-de857358efdd ovn-installed in OVS
Dec 2 05:11:58 localhost ovn_controller[153778]: 2025-12-02T10:11:58Z|00208|binding|INFO|Setting lport 5a59ed58-8e8e-4218-a027-de857358efdd up in Southbound
Dec 2 05:11:58 localhost nova_compute[281045]: 2025-12-02 10:11:58.523 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:58 localhost journal[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 2 05:11:58 localhost journal[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 2 05:11:58 localhost journal[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 2 05:11:58 localhost journal[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 2 05:11:58 localhost journal[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 2 05:11:58 localhost journal[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 2 05:11:58 localhost nova_compute[281045]: 2025-12-02 10:11:58.557 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:58 localhost nova_compute[281045]: 2025-12-02 10:11:58.584 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:58 localhost nova_compute[281045]: 2025-12-02 10:11:58.622 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:58 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e182 e182: 6 total, 6 up, 6 in
Dec 2 05:11:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v450: 177 pgs: 177 active+clean; 197 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 53 KiB/s wr, 127 op/s
Dec 2 05:11:59 localhost podman[321354]:
Dec 2 05:11:59 localhost podman[321354]: 2025-12-02 10:11:59.364313567 +0000 UTC m=+0.097787496 container create 86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:11:59 localhost systemd[1]: Started libpod-conmon-86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2.scope.
Dec 2 05:11:59 localhost podman[321354]: 2025-12-02 10:11:59.317298943 +0000 UTC m=+0.050772902 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:11:59 localhost systemd[1]: Started libcrun container.
Dec 2 05:11:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcaf42c28c81b21a1deda3e252f2a551239aaaaf7b56461c20fc989a8f045479/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:11:59 localhost podman[321354]: 2025-12-02 10:11:59.440438057 +0000 UTC m=+0.173911966 container init 86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 2 05:11:59 localhost podman[321354]: 2025-12-02 10:11:59.446691159 +0000 UTC m=+0.180165068 container start 86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:11:59 localhost dnsmasq[321373]: started, version 2.85 cachesize 150
Dec 2 05:11:59 localhost dnsmasq[321373]: DNS service limited to local subnets
Dec 2 05:11:59 localhost dnsmasq[321373]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:11:59 localhost dnsmasq[321373]: warning: no upstream servers configured
Dec 2 05:11:59 localhost dnsmasq-dhcp[321373]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 2 05:11:59 localhost dnsmasq[321373]: read /var/lib/neutron/dhcp/dcbd2fde-cd87-4087-93b9-a7b43b07dcbf/addn_hosts - 0 addresses
Dec 2 05:11:59 localhost dnsmasq-dhcp[321373]: read /var/lib/neutron/dhcp/dcbd2fde-cd87-4087-93b9-a7b43b07dcbf/host
Dec 2 05:11:59 localhost dnsmasq-dhcp[321373]: read /var/lib/neutron/dhcp/dcbd2fde-cd87-4087-93b9-a7b43b07dcbf/opts
Dec 2 05:11:59 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:11:59.630 262347 INFO neutron.agent.dhcp.agent [None req-7af2711b-304d-4fae-9a95-8fb2b43f7bcf - - - - - -] DHCP configuration for ports {'eaffce96-ae12-457e-84d2-2c06058bbc40'} is completed#033[00m
Dec 2 05:11:59 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e183 e183: 6 total, 6 up, 6 in
Dec 2 05:12:00 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 2 05:12:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:12:01 localhost nova_compute[281045]: 2025-12-02 10:12:01.031 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:01 localhost neutron_sriov_agent[255428]: 2025-12-02 10:12:01.061 2 INFO neutron.agent.securitygroups_rpc [None req-bd3bdc93-0ba6-42a3-9063-ee94eddd1f8f defcf0debbf84a5c9ec6342ae3d02928 8eea084241c14c5d9a6cc0d912041a21 - - default default] Security group member updated ['712bb249-1109-4289-a9cf-1e3d3f6e301e']#033[00m
Dec 2 05:12:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v452: 177 pgs: 177 active+clean; 197 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 25 KiB/s wr, 72 op/s
Dec 2 05:12:01 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 2 05:12:01 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:12:01 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 2 05:12:01 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 2 05:12:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:12:01 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 2 05:12:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:12:01 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 2 05:12:01 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 2 05:12:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:12:01 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 2 05:12:01 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/965134514' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 2 05:12:01 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 2 05:12:01 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/965134514' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 2 05:12:02 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:12:02 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 2 05:12:02 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 2 05:12:02 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 2 05:12:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e184 e184: 6 total, 6 up, 6 in
Dec 2 05:12:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:12:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 05:12:03 localhost podman[321375]: 2025-12-02 10:12:03.049841292 +0000 UTC m=+0.053880256 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 2 05:12:03 localhost podman[321375]: 2025-12-02 10:12:03.061640924 +0000 UTC m=+0.065679878 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:12:03 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 05:12:03 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v454: 177 pgs: 177 active+clean; 197 MiB data, 988 MiB used, 41 GiB / 42 GiB avail; 174 KiB/s rd, 55 KiB/s wr, 239 op/s
Dec 2 05:12:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:03.182 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:12:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:03.182 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:12:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:03.182 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:12:03 localhost nova_compute[281045]: 2025-12-02 10:12:03.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:12:03 localhost nova_compute[281045]: 2025-12-02 10:12:03.625 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:03 localhost podman[239757]: time="2025-12-02T10:12:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 05:12:03 localhost podman[239757]: @ - - [02/Dec/2025:10:12:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 160393 "" "Go-http-client/1.1"
Dec 2 05:12:03 localhost podman[239757]: @ - - [02/Dec/2025:10:12:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20155 "" "Go-http-client/1.1"
Dec 2 05:12:03 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 2 05:12:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 2 05:12:03 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 2 05:12:03 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:12:03 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 2 05:12:03 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 2 05:12:03 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:12:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 2 05:12:04 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:12:04 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:12:04 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:12:04 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 2 05:12:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 2 05:12:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1431900446' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 2 05:12:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 2 05:12:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1431900446' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 2 05:12:05 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v455: 177 pgs: 177 active+clean; 197 MiB data, 988 MiB used, 41 GiB / 42 GiB avail; 108 KiB/s rd, 26 KiB/s wr, 149 op/s
Dec 2 05:12:05 localhost nova_compute[281045]: 2025-12-02 10:12:05.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:12:06 localhost nova_compute[281045]: 2025-12-02 10:12:06.037 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e185 e185: 6 total, 6 up, 6 in
Dec 2 05:12:06 localhost nova_compute[281045]: 2025-12-02 10:12:06.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:12:06 localhost nova_compute[281045]: 2025-12-02 10:12:06.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:12:06 localhost nova_compute[281045]: 2025-12-02 10:12:06.556 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:12:06 localhost nova_compute[281045]: 2025-12-02 10:12:06.557 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:12:06 localhost nova_compute[281045]: 2025-12-02 10:12:06.557 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:12:06 localhost nova_compute[281045]: 2025-12-02 10:12:06.557 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 2 05:12:06 localhost nova_compute[281045]: 2025-12-02 10:12:06.558 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 05:12:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e186 e186: 6 total, 6 up, 6 in
Dec 2 05:12:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:12:06
Dec 2 05:12:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 2 05:12:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 2 05:12:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['.mgr', 'manila_data', 'images', 'backups', 'vms', 'volumes', 'manila_metadata']
Dec 2 05:12:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 2 05:12:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:12:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:12:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 2 05:12:06 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/200831030' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 2 05:12:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:12:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:12:07 localhost nova_compute[281045]: 2025-12-02 10:12:07.015 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 05:12:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:12:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:12:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v458: 177 pgs: 177 active+clean; 197 MiB data, 988 MiB used, 41 GiB / 42 GiB avail; 116 KiB/s rd, 28 KiB/s wr, 160 op/s
Dec 2 05:12:07 localhost nova_compute[281045]: 2025-12-02 10:12:07.198 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 2 05:12:07 localhost nova_compute[281045]: 2025-12-02 10:12:07.200 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11510MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 2 05:12:07 localhost nova_compute[281045]: 2025-12-02 10:12:07.201 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:12:07 localhost nova_compute[281045]: 2025-12-02 10:12:07.202 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:12:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 2 05:12:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 2 05:12:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 2 05:12:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 2 05:12:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:12:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 2 05:12:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:12:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 2 05:12:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:12:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014844731469849247 of space, bias 1.0, pg target 0.2963998050146566 quantized to 32 (current 32)
Dec 2 05:12:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:12:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 2 05:12:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:12:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 2 05:12:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:12:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016276041666666666 quantized to 32 (current 32)
Dec 2 05:12:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:12:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0003675059324399777 of space, bias 4.0, pg target 0.2925347222222222 quantized to 16 (current 16)
Dec 2 05:12:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 2 05:12:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 2 05:12:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 2 05:12:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 2 05:12:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 2 05:12:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 2 05:12:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 2 05:12:07 localhost nova_compute[281045]: 2025-12-02 10:12:07.272 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 2 05:12:07 localhost nova_compute[281045]: 2025-12-02 10:12:07.272 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 2 05:12:07 localhost nova_compute[281045]: 2025-12-02 10:12:07.314 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 05:12:07 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 2 05:12:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:12:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 2 05:12:07 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:12:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 2 05:12:07 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 2 05:12:07 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:12:07 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 2 05:12:07 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 2 05:12:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 2 05:12:07 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id":
"alice_bob", "format": "json"}]: dispatch Dec 2 05:12:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:07 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:12:07 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:12:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e187 e187: 6 total, 6 up, 6 in Dec 2 05:12:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:12:07 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2956956327' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:12:07 localhost nova_compute[281045]: 2025-12-02 10:12:07.821 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:12:07 localhost nova_compute[281045]: 2025-12-02 10:12:07.827 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:12:07 localhost nova_compute[281045]: 2025-12-02 10:12:07.901 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:12:07 localhost nova_compute[281045]: 2025-12-02 10:12:07.903 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:12:07 localhost nova_compute[281045]: 2025-12-02 10:12:07.904 281049 DEBUG 
oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:12:08 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 2 05:12:08 localhost nova_compute[281045]: 2025-12-02 10:12:08.627 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:08 localhost nova_compute[281045]: 2025-12-02 10:12:08.905 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:12:08 localhost nova_compute[281045]: 2025-12-02 10:12:08.923 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:12:08 localhost nova_compute[281045]: 2025-12-02 10:12:08.923 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:12:08 localhost nova_compute[281045]: 2025-12-02 10:12:08.924 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:12:08 localhost nova_compute[281045]: 2025-12-02 10:12:08.941 
281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 05:12:08 localhost nova_compute[281045]: 2025-12-02 10:12:08.942 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:12:08 localhost nova_compute[281045]: 2025-12-02 10:12:08.942 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:12:09 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v460: 177 pgs: 177 active+clean; 243 MiB data, 1002 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 3.6 MiB/s wr, 118 op/s Dec 2 05:12:09 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e188 e188: 6 total, 6 up, 6 in Dec 2 05:12:09 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:09.607 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:09Z, description=, device_id=ff168046-1219-4329-be5a-02b35c99fef5, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fdc48f44-b0e1-4b3a-b889-2b67e2d1c8c7, ip_allocation=immediate, mac_address=fa:16:3e:e1:29:46, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:11:38Z, description=, dns_domain=, 
id=d7575463-fed8-42a9-b848-634ac68ed078, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeTest-test-network-2057032213, port_security_enabled=True, project_id=043cc6f66b444d00959c7dcdb078fbe8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54044, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2706, status=ACTIVE, subnets=['b2084c2f-9ef2-4632-8e41-02c37dcc4849'], tags=[], tenant_id=043cc6f66b444d00959c7dcdb078fbe8, updated_at=2025-12-02T10:11:41Z, vlan_transparent=None, network_id=d7575463-fed8-42a9-b848-634ac68ed078, port_security_enabled=False, project_id=043cc6f66b444d00959c7dcdb078fbe8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2781, status=DOWN, tags=[], tenant_id=043cc6f66b444d00959c7dcdb078fbe8, updated_at=2025-12-02T10:12:09Z on network d7575463-fed8-42a9-b848-634ac68ed078#033[00m Dec 2 05:12:09 localhost podman[321456]: 2025-12-02 10:12:09.942643514 +0000 UTC m=+0.066250376 container kill 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:09 localhost dnsmasq[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/addn_hosts - 1 addresses Dec 2 05:12:09 localhost dnsmasq-dhcp[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/host Dec 2 05:12:09 localhost dnsmasq-dhcp[321113]: read 
/var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/opts Dec 2 05:12:10 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:10.228 262347 INFO neutron.agent.dhcp.agent [None req-463e93d4-7f91-4ad0-a6db-ae4ca58f0bc2 - - - - - -] DHCP configuration for ports {'fdc48f44-b0e1-4b3a-b889-2b67e2d1c8c7'} is completed#033[00m Dec 2 05:12:10 localhost nova_compute[281045]: 2025-12-02 10:12:10.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:12:10 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:12:10 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:12:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 2 05:12:10 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:12:10 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:12:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command 
mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:12:10 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:12:10 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:12:11 localhost nova_compute[281045]: 2025-12-02 10:12:11.042 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v462: 177 pgs: 177 active+clean; 243 MiB data, 1002 MiB used, 41 GiB / 42 GiB avail; 4.4 MiB/s rd, 4.5 MiB/s wr, 149 op/s Dec 2 05:12:11 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:12:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:12:11 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:12:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:12:11 localhost nova_compute[281045]: 2025-12-02 10:12:11.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:12:11 localhost nova_compute[281045]: 2025-12-02 10:12:11.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:12:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e189 e189: 6 total, 6 up, 6 in Dec 2 05:12:12 localhost openstack_network_exporter[241816]: ERROR 10:12:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:12:12 localhost openstack_network_exporter[241816]: ERROR 10:12:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:12:12 localhost openstack_network_exporter[241816]: ERROR 10:12:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:12:12 localhost openstack_network_exporter[241816]: ERROR 10:12:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:12:12 localhost openstack_network_exporter[241816]: Dec 2 05:12:12 localhost openstack_network_exporter[241816]: ERROR 10:12:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:12:12 localhost openstack_network_exporter[241816]: Dec 2 05:12:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:12 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:12.764 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:09Z, description=, device_id=ff168046-1219-4329-be5a-02b35c99fef5, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fdc48f44-b0e1-4b3a-b889-2b67e2d1c8c7, ip_allocation=immediate, mac_address=fa:16:3e:e1:29:46, name=, network=admin_state_up=True, 
availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:11:38Z, description=, dns_domain=, id=d7575463-fed8-42a9-b848-634ac68ed078, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeTest-test-network-2057032213, port_security_enabled=True, project_id=043cc6f66b444d00959c7dcdb078fbe8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54044, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2706, status=ACTIVE, subnets=['b2084c2f-9ef2-4632-8e41-02c37dcc4849'], tags=[], tenant_id=043cc6f66b444d00959c7dcdb078fbe8, updated_at=2025-12-02T10:11:41Z, vlan_transparent=None, network_id=d7575463-fed8-42a9-b848-634ac68ed078, port_security_enabled=False, project_id=043cc6f66b444d00959c7dcdb078fbe8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2781, status=DOWN, tags=[], tenant_id=043cc6f66b444d00959c7dcdb078fbe8, updated_at=2025-12-02T10:12:09Z on network d7575463-fed8-42a9-b848-634ac68ed078#033[00m Dec 2 05:12:13 localhost systemd[1]: tmp-crun.DBCrcP.mount: Deactivated successfully. 
Dec 2 05:12:13 localhost dnsmasq[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/addn_hosts - 1 addresses Dec 2 05:12:13 localhost dnsmasq-dhcp[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/host Dec 2 05:12:13 localhost dnsmasq-dhcp[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/opts Dec 2 05:12:13 localhost podman[321493]: 2025-12-02 10:12:13.005379802 +0000 UTC m=+0.070425025 container kill 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v464: 177 pgs: 177 active+clean; 197 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 274 op/s Dec 2 05:12:13 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:13.193 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:12Z, description=, device_id=11fb415a-fd46-4f5e-91b3-127c51ee0b41, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=35bde078-363a-4aaf-a0b9-6375ba936eaf, ip_allocation=immediate, mac_address=fa:16:3e:cb:64:57, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, 
id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2790, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:12:12Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:12:13 localhost neutron_sriov_agent[255428]: 2025-12-02 10:12:13.251 2 INFO neutron.agent.securitygroups_rpc [None req-1b9cf27d-4f71-42a8-aff0-a386ad5e469f 27e8ee5045c2430583000f8d62f6e4f1 096ffa0a51b143039159efc232ec547a - - default default] Security group member updated ['0a7d83ca-acbf-4932-884e-9eff3b0bc0ff']#033[00m Dec 2 05:12:13 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:13.307 262347 INFO neutron.agent.dhcp.agent [None req-ead53d56-2f7b-48ee-b3c3-5443783ab476 - - - - - -] DHCP configuration for ports {'fdc48f44-b0e1-4b3a-b889-2b67e2d1c8c7'} is completed#033[00m Dec 2 05:12:13 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:12:13 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:12:13 localhost podman[321531]: 2025-12-02 10:12:13.419379223 +0000 UTC m=+0.059675075 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:12:13 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:12:13 localhost systemd[1]: tmp-crun.R3SS6a.mount: Deactivated successfully. Dec 2 05:12:13 localhost nova_compute[281045]: 2025-12-02 10:12:13.629 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:13 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:13.637 262347 INFO neutron.agent.dhcp.agent [None req-3f4bc7e2-5de8-4280-899a-8f4c7281456e - - - - - -] DHCP configuration for ports {'35bde078-363a-4aaf-a0b9-6375ba936eaf'} is completed#033[00m Dec 2 05:12:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e190 e190: 6 total, 6 up, 6 in Dec 2 05:12:13 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 2 05:12:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:14 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 2 05:12:14 
localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:12:14 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Dec 2 05:12:14 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:12:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:14 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 2 05:12:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:14 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:12:14 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:12:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:14 localhost 
dnsmasq[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/addn_hosts - 0 addresses Dec 2 05:12:14 localhost dnsmasq-dhcp[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/host Dec 2 05:12:14 localhost dnsmasq-dhcp[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/opts Dec 2 05:12:14 localhost podman[321570]: 2025-12-02 10:12:14.445551373 +0000 UTC m=+0.060188111 container kill 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:12:14 localhost nova_compute[281045]: 2025-12-02 10:12:14.664 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:14 localhost kernel: device tap60565337-ba left promiscuous mode Dec 2 05:12:14 localhost ovn_controller[153778]: 2025-12-02T10:12:14Z|00209|binding|INFO|Releasing lport 60565337-ba9f-460c-b321-9bed6bae4c6b from this chassis (sb_readonly=0) Dec 2 05:12:14 localhost ovn_controller[153778]: 2025-12-02T10:12:14Z|00210|binding|INFO|Setting lport 60565337-ba9f-460c-b321-9bed6bae4c6b down in Southbound Dec 2 05:12:14 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:14.672 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], 
options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-d7575463-fed8-42a9-b848-634ac68ed078', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7575463-fed8-42a9-b848-634ac68ed078', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '043cc6f66b444d00959c7dcdb078fbe8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3d9cd90-16cc-47f0-86ae-247f0a618c23, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=60565337-ba9f-460c-b321-9bed6bae4c6b) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:14 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:14.674 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 60565337-ba9f-460c-b321-9bed6bae4c6b in datapath d7575463-fed8-42a9-b848-634ac68ed078 unbound from our chassis#033[00m Dec 2 05:12:14 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:14.677 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7575463-fed8-42a9-b848-634ac68ed078, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:12:14 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:14.678 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[2b20a0ab-386d-4dca-b9b4-5071f4e74ebf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:14 localhost 
ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:12:14 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:12:14 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:12:14 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 2 05:12:14 localhost nova_compute[281045]: 2025-12-02 10:12:14.689 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:14 localhost sshd[321594]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:12:14 localhost sshd[321596]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:12:14 localhost sshd[321598]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:12:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v466: 177 pgs: 177 active+clean; 197 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 110 KiB/s rd, 33 KiB/s wr, 156 op/s Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.444 12 DEBUG ceilometer.polling.manager [-] Skip 
pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 
05:12:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:15 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:12:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < "" Dec 2 05:12:15 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta.tmp' Dec 2 05:12:15 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta.tmp' to config b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta' Dec 2 05:12:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < "" Dec 2 05:12:15 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "format": "json"}]: dispatch Dec 2 05:12:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs 
subvolume getpath, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < "" Dec 2 05:12:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < "" Dec 2 05:12:16 localhost nova_compute[281045]: 2025-12-02 10:12:16.075 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:16 localhost podman[321616]: 2025-12-02 10:12:16.46551188 +0000 UTC m=+0.048369157 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 2 05:12:16 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:12:16 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:12:16 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:12:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e191 e191: 6 total, 6 up, 6 in Dec 2 05:12:16 localhost nova_compute[281045]: 2025-12-02 10:12:16.647 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:17 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs 
subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch Dec 2 05:12:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:12:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 2 05:12:17 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:12:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v468: 177 pgs: 177 active+clean; 197 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 110 KiB/s rd, 33 KiB/s wr, 156 op/s Dec 2 05:12:17 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:12:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:12:17 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:12:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:12:17 localhost neutron_sriov_agent[255428]: 2025-12-02 10:12:17.471 2 INFO neutron.agent.securitygroups_rpc [None req-12cf221e-0940-4511-92ba-1f5763df32bf 27e8ee5045c2430583000f8d62f6e4f1 096ffa0a51b143039159efc232ec547a - - default default] Security group member updated ['0a7d83ca-acbf-4932-884e-9eff3b0bc0ff']#033[00m Dec 2 05:12:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:17 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:12:17 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:12:17 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", 
"osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:12:17 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:12:17 localhost systemd[1]: tmp-crun.jqqcn0.mount: Deactivated successfully. Dec 2 05:12:17 localhost dnsmasq[321373]: exiting on receipt of SIGTERM Dec 2 05:12:17 localhost podman[321656]: 2025-12-02 10:12:17.88513119 +0000 UTC m=+0.083184527 container kill 86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:17 localhost systemd[1]: libpod-86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2.scope: Deactivated successfully. 
Dec 2 05:12:17 localhost podman[321670]: 2025-12-02 10:12:17.955722819 +0000 UTC m=+0.055484496 container died 86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:12:17 localhost podman[321670]: 2025-12-02 10:12:17.983615366 +0000 UTC m=+0.083376983 container cleanup 86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:12:17 localhost systemd[1]: libpod-conmon-86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2.scope: Deactivated successfully. 
Dec 2 05:12:18 localhost podman[321672]: 2025-12-02 10:12:18.02638203 +0000 UTC m=+0.119274776 container remove 86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:12:18 localhost nova_compute[281045]: 2025-12-02 10:12:18.075 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:18 localhost ovn_controller[153778]: 2025-12-02T10:12:18Z|00211|binding|INFO|Releasing lport 5a59ed58-8e8e-4218-a027-de857358efdd from this chassis (sb_readonly=0) Dec 2 05:12:18 localhost kernel: device tap5a59ed58-8e left promiscuous mode Dec 2 05:12:18 localhost ovn_controller[153778]: 2025-12-02T10:12:18Z|00212|binding|INFO|Setting lport 5a59ed58-8e8e-4218-a027-de857358efdd down in Southbound Dec 2 05:12:18 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:18.084 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '043cc6f66b444d00959c7dcdb078fbe8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=569352f8-5776-4fd7-bf95-e3a12d36086c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5a59ed58-8e8e-4218-a027-de857358efdd) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:18 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:18.085 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 5a59ed58-8e8e-4218-a027-de857358efdd in datapath dcbd2fde-cd87-4087-93b9-a7b43b07dcbf unbound from our chassis#033[00m Dec 2 05:12:18 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:18.087 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:12:18 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:18.088 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[c70fd4f2-38e6-45d2-bd77-f9803fb144f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:18 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:12:18 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1842775925' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:12:18 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:12:18 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1842775925' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:12:18 localhost nova_compute[281045]: 2025-12-02 10:12:18.098 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:18 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses Dec 2 05:12:18 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:12:18 localhost podman[321717]: 2025-12-02 10:12:18.13182419 +0000 UTC m=+0.045861960 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:12:18 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:12:18 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:18.575 262347 INFO neutron.agent.dhcp.agent [None req-09c6e4f1-e646-4647-90b1-164dcdf57259 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:18 localhost 
nova_compute[281045]: 2025-12-02 10:12:18.631 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:18 localhost systemd[1]: var-lib-containers-storage-overlay-dcaf42c28c81b21a1deda3e252f2a551239aaaaf7b56461c20fc989a8f045479-merged.mount: Deactivated successfully. Dec 2 05:12:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2-userdata-shm.mount: Deactivated successfully. Dec 2 05:12:18 localhost systemd[1]: run-netns-qdhcp\x2ddcbd2fde\x2dcd87\x2d4087\x2d93b9\x2da7b43b07dcbf.mount: Deactivated successfully. Dec 2 05:12:19 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:19.099 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:19 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "snap_name": "a0843ffe-d6ab-48e1-a5c8-33bfbdacd761", "format": "json"}]: dispatch Dec 2 05:12:19 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a0843ffe-d6ab-48e1-a5c8-33bfbdacd761, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < "" Dec 2 05:12:19 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a0843ffe-d6ab-48e1-a5c8-33bfbdacd761, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < "" Dec 2 05:12:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v469: 177 pgs: 177 active+clean; 198 MiB data, 979 MiB used, 41 GiB / 42 GiB avail; 125 KiB/s rd, 71 KiB/s wr, 180 op/s Dec 2 05:12:19 
localhost nova_compute[281045]: 2025-12-02 10:12:19.376 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:12:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:12:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:12:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:12:20 localhost podman[321738]: 2025-12-02 10:12:20.080916489 +0000 UTC m=+0.085486677 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:12:20 localhost systemd[1]: tmp-crun.kOgTsV.mount: Deactivated successfully. Dec 2 05:12:20 localhost podman[321739]: 2025-12-02 10:12:20.134590129 +0000 UTC m=+0.135728232 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:12:20 localhost podman[321739]: 2025-12-02 10:12:20.146182695 +0000 UTC m=+0.147320778 container exec_died 
8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 05:12:20 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 05:12:20 localhost podman[321738]: 2025-12-02 10:12:20.163976771 +0000 UTC m=+0.168546929 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 2 05:12:20 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: 
Deactivated successfully. Dec 2 05:12:20 localhost podman[321746]: 2025-12-02 10:12:20.149444105 +0000 UTC m=+0.141645104 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:12:20 localhost podman[321746]: 2025-12-02 10:12:20.229500605 +0000 UTC m=+0.221701624 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:12:20 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:12:20 localhost podman[321740]: 2025-12-02 10:12:20.290978933 +0000 UTC m=+0.289121714 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:20 localhost podman[321740]: 2025-12-02 10:12:20.304845679 +0000 UTC m=+0.302988450 container exec_died 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:12:20 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:12:20 localhost dnsmasq[321113]: exiting on receipt of SIGTERM Dec 2 05:12:20 localhost podman[321835]: 2025-12-02 10:12:20.384620771 +0000 UTC m=+0.039907687 container kill 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:12:20 localhost systemd[1]: libpod-31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f.scope: Deactivated successfully. Dec 2 05:12:20 localhost podman[321851]: 2025-12-02 10:12:20.427402335 +0000 UTC m=+0.030785847 container died 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:12:20 localhost podman[321851]: 2025-12-02 10:12:20.445224724 +0000 UTC m=+0.048608216 container cleanup 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:12:20 localhost systemd[1]: libpod-conmon-31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f.scope: Deactivated successfully. Dec 2 05:12:20 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 2 05:12:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:20 localhost podman[321850]: 2025-12-02 10:12:20.514326416 +0000 UTC m=+0.116237572 container remove 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:12:20 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 2 05:12:20 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:12:20 localhost 
ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Dec 2 05:12:20 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:12:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:20 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 2 05:12:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:20 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:12:20 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:12:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:20 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:20.709 262347 INFO neutron.agent.dhcp.agent [None req-4e982f42-dd31-44e3-9576-1423f2ff4236 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:20 localhost 
ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:12:20 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:12:20 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:12:20 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 2 05:12:20 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:20.820 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:21 localhost systemd[1]: var-lib-containers-storage-overlay-8a238a32a00c88b302c7631d7402cd3a543c022c2ba3aed4b7050aab5db379c8-merged.mount: Deactivated successfully. Dec 2 05:12:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f-userdata-shm.mount: Deactivated successfully. Dec 2 05:12:21 localhost systemd[1]: run-netns-qdhcp\x2dd7575463\x2dfed8\x2d42a9\x2db848\x2d634ac68ed078.mount: Deactivated successfully. 
Dec 2 05:12:21 localhost nova_compute[281045]: 2025-12-02 10:12:21.077 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v470: 177 pgs: 177 active+clean; 198 MiB data, 979 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 42 KiB/s wr, 52 op/s Dec 2 05:12:21 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:21.423 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e192 e192: 6 total, 6 up, 6 in Dec 2 05:12:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:22 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "snap_name": "a0843ffe-d6ab-48e1-a5c8-33bfbdacd761_e41a686a-024b-44d9-a830-e637ced25120", "force": true, "format": "json"}]: dispatch Dec 2 05:12:22 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a0843ffe-d6ab-48e1-a5c8-33bfbdacd761_e41a686a-024b-44d9-a830-e637ced25120, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < "" Dec 2 05:12:23 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta.tmp' Dec 2 05:12:23 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta.tmp' to config 
b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta' Dec 2 05:12:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a0843ffe-d6ab-48e1-a5c8-33bfbdacd761_e41a686a-024b-44d9-a830-e637ced25120, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < "" Dec 2 05:12:23 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "snap_name": "a0843ffe-d6ab-48e1-a5c8-33bfbdacd761", "force": true, "format": "json"}]: dispatch Dec 2 05:12:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a0843ffe-d6ab-48e1-a5c8-33bfbdacd761, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < "" Dec 2 05:12:23 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta.tmp' Dec 2 05:12:23 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta.tmp' to config b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta' Dec 2 05:12:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a0843ffe-d6ab-48e1-a5c8-33bfbdacd761, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < "" Dec 2 05:12:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 198 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 68 KiB/s wr, 55 op/s Dec 2 05:12:23 localhost 
nova_compute[281045]: 2025-12-02 10:12:23.634 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:23 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:12:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:12:23 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Dec 2 05:12:23 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 2 05:12:23 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID bob with tenant a241a07e4161486091e8de3f95a1d6c6 Dec 2 05:12:23 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:12:23 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth 
get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:12:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:12:24 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:24.358 262347 INFO neutron.agent.linux.ip_lib [None req-10d4fab1-1edf-451c-862e-68882ca8de40 - - - - - -] Device tap2f12f501-39 cannot be used as it has no MAC address#033[00m Dec 2 05:12:24 localhost nova_compute[281045]: 2025-12-02 10:12:24.422 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:24 localhost kernel: device tap2f12f501-39 entered promiscuous mode Dec 2 05:12:24 localhost NetworkManager[5967]: [1764670344.4307] manager: (tap2f12f501-39): new Generic device (/org/freedesktop/NetworkManager/Devices/43) Dec 2 05:12:24 localhost ovn_controller[153778]: 2025-12-02T10:12:24Z|00213|binding|INFO|Claiming lport 2f12f501-3942-418c-89e6-d03f08b5b903 for this chassis. Dec 2 05:12:24 localhost ovn_controller[153778]: 2025-12-02T10:12:24Z|00214|binding|INFO|2f12f501-3942-418c-89e6-d03f08b5b903: Claiming unknown Dec 2 05:12:24 localhost nova_compute[281045]: 2025-12-02 10:12:24.432 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:24 localhost systemd-udevd[321886]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:12:24 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:24.443 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-0ed6501a-31af-475e-83c5-b9d22d72adda', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ed6501a-31af-475e-83c5-b9d22d72adda', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a1854cb9cd7e49c4a6a223acc8d74075', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=290a207a-1202-4b25-ae81-ba8a96163204, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2f12f501-3942-418c-89e6-d03f08b5b903) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:24 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:24.445 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 2f12f501-3942-418c-89e6-d03f08b5b903 in datapath 0ed6501a-31af-475e-83c5-b9d22d72adda bound to our chassis#033[00m Dec 2 05:12:24 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:24.446 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0ed6501a-31af-475e-83c5-b9d22d72adda or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:12:24 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:24.447 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[59520c80-154e-4459-b2ae-203ec0c57b34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:24 localhost journal[229262]: ethtool ioctl error on tap2f12f501-39: No such device Dec 2 05:12:24 localhost ovn_controller[153778]: 2025-12-02T10:12:24Z|00215|binding|INFO|Setting lport 2f12f501-3942-418c-89e6-d03f08b5b903 ovn-installed in OVS Dec 2 05:12:24 localhost ovn_controller[153778]: 2025-12-02T10:12:24Z|00216|binding|INFO|Setting lport 2f12f501-3942-418c-89e6-d03f08b5b903 up in Southbound Dec 2 05:12:24 localhost nova_compute[281045]: 2025-12-02 10:12:24.465 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:24 localhost journal[229262]: ethtool ioctl error on tap2f12f501-39: No such device Dec 2 05:12:24 localhost journal[229262]: ethtool ioctl error on tap2f12f501-39: No such device Dec 2 05:12:24 localhost journal[229262]: ethtool ioctl error on tap2f12f501-39: No such device Dec 2 05:12:24 localhost journal[229262]: ethtool ioctl error on tap2f12f501-39: No such device Dec 2 05:12:24 localhost journal[229262]: ethtool ioctl error on tap2f12f501-39: No such device Dec 2 05:12:24 localhost journal[229262]: ethtool ioctl error on tap2f12f501-39: No such device Dec 2 05:12:24 localhost journal[229262]: ethtool ioctl error on tap2f12f501-39: No such device Dec 2 05:12:24 localhost nova_compute[281045]: 2025-12-02 10:12:24.499 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:24 localhost nova_compute[281045]: 2025-12-02 10:12:24.528 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:24 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 2 05:12:24 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:12:24 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:12:24 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:12:24 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e193 e193: 6 total, 6 up, 6 in Dec 2 05:12:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v474: 177 pgs: 177 active+clean; 198 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 68 KiB/s wr, 55 op/s Dec 2 05:12:25 localhost podman[321956]: Dec 2 05:12:25 localhost podman[321956]: 2025-12-02 10:12:25.343402398 +0000 UTC m=+0.084882019 
container create b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:12:25 localhost systemd[1]: Started libpod-conmon-b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071.scope. Dec 2 05:12:25 localhost systemd[1]: Started libcrun container. Dec 2 05:12:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2824933fa3ea201d2880e13a9881e7be88073d7cf00c05154858ec9e5ab1017d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:12:25 localhost podman[321956]: 2025-12-02 10:12:25.396117367 +0000 UTC m=+0.137596998 container init b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:12:25 localhost podman[321956]: 2025-12-02 10:12:25.304227854 +0000 UTC m=+0.045707475 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:12:25 localhost podman[321956]: 2025-12-02 10:12:25.407427985 +0000 UTC m=+0.148907636 container start 
b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:12:25 localhost dnsmasq[321974]: started, version 2.85 cachesize 150 Dec 2 05:12:25 localhost dnsmasq[321974]: DNS service limited to local subnets Dec 2 05:12:25 localhost dnsmasq[321974]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:12:25 localhost dnsmasq[321974]: warning: no upstream servers configured Dec 2 05:12:25 localhost dnsmasq-dhcp[321974]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:12:25 localhost dnsmasq[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/addn_hosts - 0 addresses Dec 2 05:12:25 localhost dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/host Dec 2 05:12:25 localhost dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/opts Dec 2 05:12:25 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:25.602 262347 INFO neutron.agent.dhcp.agent [None req-6ca692d4-9cdd-4385-8125-bb7622ea7b1e - - - - - -] DHCP configuration for ports {'66e17cf8-df51-450a-8146-fab121409d73'} is completed#033[00m Dec 2 05:12:25 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e194 e194: 6 total, 6 up, 6 in Dec 2 05:12:26 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:26.065 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, 
allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:25Z, description=, device_id=8df809b8-facb-40b9-bb8b-01b96dff964d, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c4b1ca5d-6e39-4259-ad69-632c6ab0e0c6, ip_allocation=immediate, mac_address=fa:16:3e:13:f2:f0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2856, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:12:25Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:12:26 localhost nova_compute[281045]: 2025-12-02 10:12:26.080 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:26 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "format": "json"}]: dispatch 
Dec 2 05:12:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:884b3444-4a7a-4744-9a4b-7d6039625376, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:12:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:884b3444-4a7a-4744-9a4b-7d6039625376, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:12:26 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:12:26.218+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '884b3444-4a7a-4744-9a4b-7d6039625376' of type subvolume Dec 2 05:12:26 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '884b3444-4a7a-4744-9a4b-7d6039625376' of type subvolume Dec 2 05:12:26 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "force": true, "format": "json"}]: dispatch Dec 2 05:12:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < "" Dec 2 05:12:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 05:12:26 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376'' moved to trashcan Dec 2 05:12:26 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:12:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < "" Dec 2 05:12:26 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:12:26 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:12:26 localhost podman[321990]: 2025-12-02 10:12:26.310337858 +0000 UTC m=+0.067198666 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:26 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:12:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:12:26 localhost systemd[1]: tmp-crun.zOcbl5.mount: Deactivated successfully. 
Dec 2 05:12:26 localhost podman[321991]: 2025-12-02 10:12:26.393735801 +0000 UTC m=+0.146401699 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_id=edpm, release=1755695350, architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9) Dec 2 05:12:26 localhost podman[321991]: 2025-12-02 10:12:26.407777472 +0000 UTC m=+0.160443330 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git) Dec 2 05:12:26 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:12:26 localhost podman[322018]: 2025-12-02 10:12:26.449516765 +0000 UTC m=+0.076395958 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:12:26 localhost podman[322018]: 2025-12-02 10:12:26.486955225 +0000 UTC m=+0.113834418 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:12:26 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:12:26 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:26.560 262347 INFO neutron.agent.dhcp.agent [None req-1dfcae20-88e2-4169-95a4-2b08e9d5a9bb - - - - - -] DHCP configuration for ports {'c4b1ca5d-6e39-4259-ad69-632c6ab0e0c6'} is completed#033[00m Dec 2 05:12:26 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e195 e195: 6 total, 6 up, 6 in Dec 2 05:12:27 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:27.051 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:26Z, description=, device_id=48efaaba-b00b-49e5-9453-c3674fe08fa7, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cd561d24-857a-48f4-96ac-6e7e7342afec, ip_allocation=immediate, mac_address=fa:16:3e:1d:40:1e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2860, status=DOWN, tags=[], tenant_id=, 
updated_at=2025-12-02T10:12:26Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:12:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v477: 177 pgs: 177 active+clean; 198 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s wr, 4 op/s Dec 2 05:12:27 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:12:27 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:12:27 localhost podman[322069]: 2025-12-02 10:12:27.255339865 +0000 UTC m=+0.051084971 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:12:27 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:12:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:27 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:27.522 262347 INFO neutron.agent.dhcp.agent [None req-8ea8bd61-bb0c-4381-b54b-4234e8bd708e - - - - - -] DHCP configuration for ports {'cd561d24-857a-48f4-96ac-6e7e7342afec'} is completed#033[00m Dec 2 05:12:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e196 e196: 6 total, 6 up, 6 in Dec 2 05:12:27 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume 
create", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:12:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < "" Dec 2 05:12:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/.meta.tmp' Dec 2 05:12:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/.meta.tmp' to config b'/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/.meta' Dec 2 05:12:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < "" Dec 2 05:12:27 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "format": "json"}]: dispatch Dec 2 05:12:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < "" Dec 2 05:12:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < "" Dec 2 05:12:28 localhost nova_compute[281045]: 2025-12-02 10:12:28.190 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:28 localhost nova_compute[281045]: 2025-12-02 10:12:28.638 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:29 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e197 e197: 6 total, 6 up, 6 in Dec 2 05:12:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v480: 177 pgs: 177 active+clean; 198 MiB data, 985 MiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 107 KiB/s wr, 152 op/s Dec 2 05:12:29 localhost nova_compute[281045]: 2025-12-02 10:12:29.353 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:30 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e198 e198: 6 total, 6 up, 6 in Dec 2 05:12:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:30.555 262347 INFO neutron.agent.linux.ip_lib [None req-9dac3193-bf5f-4c2e-8889-0f4528b741ac - - - - - -] Device tapdb728fdc-d8 cannot be used as it has no MAC address#033[00m Dec 2 05:12:30 localhost nova_compute[281045]: 2025-12-02 10:12:30.611 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:30 localhost kernel: device tapdb728fdc-d8 entered promiscuous mode Dec 2 05:12:30 localhost NetworkManager[5967]: [1764670350.6210] manager: (tapdb728fdc-d8): new Generic device (/org/freedesktop/NetworkManager/Devices/44) Dec 2 05:12:30 localhost ovn_controller[153778]: 2025-12-02T10:12:30Z|00217|binding|INFO|Claiming lport db728fdc-d8ab-468d-a1cd-ab731f58d6dc for this chassis. 
Dec 2 05:12:30 localhost ovn_controller[153778]: 2025-12-02T10:12:30Z|00218|binding|INFO|db728fdc-d8ab-468d-a1cd-ab731f58d6dc: Claiming unknown Dec 2 05:12:30 localhost systemd-udevd[322099]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:12:30 localhost nova_compute[281045]: 2025-12-02 10:12:30.624 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:30.629 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcd3979d-613c-4a99-a744-aee0cbcf87d6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=db728fdc-d8ab-468d-a1cd-ab731f58d6dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:30.631 159483 INFO 
neutron.agent.ovn.metadata.agent [-] Port db728fdc-d8ab-468d-a1cd-ab731f58d6dc in datapath ca9eef71-1213-4a2c-90d0-cfc01ce50fc6 bound to our chassis#033[00m Dec 2 05:12:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:30.632 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ca9eef71-1213-4a2c-90d0-cfc01ce50fc6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:12:30 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:30.632 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[31401f2e-bfb8-4d85-8725-0bd807121f4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:30 localhost journal[229262]: ethtool ioctl error on tapdb728fdc-d8: No such device Dec 2 05:12:30 localhost ovn_controller[153778]: 2025-12-02T10:12:30Z|00219|binding|INFO|Setting lport db728fdc-d8ab-468d-a1cd-ab731f58d6dc ovn-installed in OVS Dec 2 05:12:30 localhost ovn_controller[153778]: 2025-12-02T10:12:30Z|00220|binding|INFO|Setting lport db728fdc-d8ab-468d-a1cd-ab731f58d6dc up in Southbound Dec 2 05:12:30 localhost nova_compute[281045]: 2025-12-02 10:12:30.653 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:30 localhost journal[229262]: ethtool ioctl error on tapdb728fdc-d8: No such device Dec 2 05:12:30 localhost journal[229262]: ethtool ioctl error on tapdb728fdc-d8: No such device Dec 2 05:12:30 localhost journal[229262]: ethtool ioctl error on tapdb728fdc-d8: No such device Dec 2 05:12:30 localhost journal[229262]: ethtool ioctl error on tapdb728fdc-d8: No such device Dec 2 05:12:30 localhost journal[229262]: ethtool ioctl error on tapdb728fdc-d8: No such device Dec 2 05:12:30 localhost journal[229262]: ethtool ioctl error on tapdb728fdc-d8: No such 
device Dec 2 05:12:30 localhost journal[229262]: ethtool ioctl error on tapdb728fdc-d8: No such device Dec 2 05:12:30 localhost nova_compute[281045]: 2025-12-02 10:12:30.679 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:30 localhost nova_compute[281045]: 2025-12-02 10:12:30.701 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:30 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:30.728 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:30Z, description=, device_id=48efaaba-b00b-49e5-9453-c3674fe08fa7, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0e034430-6b11-4bae-9254-bdf3f84dc12d, ip_allocation=immediate, mac_address=fa:16:3e:27:a3:1d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:22Z, description=, dns_domain=, id=0ed6501a-31af-475e-83c5-b9d22d72adda, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-1168865774-network, port_security_enabled=True, project_id=a1854cb9cd7e49c4a6a223acc8d74075, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=20837, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2841, status=ACTIVE, subnets=['418123ed-5885-4e16-91c2-f687fe4bb883'], tags=[], tenant_id=a1854cb9cd7e49c4a6a223acc8d74075, updated_at=2025-12-02T10:12:23Z, vlan_transparent=None, network_id=0ed6501a-31af-475e-83c5-b9d22d72adda, port_security_enabled=False, 
project_id=a1854cb9cd7e49c4a6a223acc8d74075, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2872, status=DOWN, tags=[], tenant_id=a1854cb9cd7e49c4a6a223acc8d74075, updated_at=2025-12-02T10:12:30Z on network 0ed6501a-31af-475e-83c5-b9d22d72adda#033[00m Dec 2 05:12:30 localhost dnsmasq[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/addn_hosts - 1 addresses Dec 2 05:12:30 localhost dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/host Dec 2 05:12:30 localhost dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/opts Dec 2 05:12:30 localhost podman[322179]: 2025-12-02 10:12:30.986358887 +0000 UTC m=+0.053757352 container kill b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:12:31 localhost nova_compute[281045]: 2025-12-02 10:12:31.081 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v482: 177 pgs: 177 active+clean; 198 MiB data, 985 MiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 98 KiB/s wr, 140 op/s Dec 2 05:12:31 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": 
"b9e19d1e-178b-4a98-88b5-d79880cd9496", "auth_id": "bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:12:31 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:12:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Dec 2 05:12:31 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 2 05:12:31 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:31.398 262347 INFO neutron.agent.dhcp.agent [None req-92fbaf53-e152-41e3-bae3-41ce2726b01b - - - - - -] DHCP configuration for ports {'0e034430-6b11-4bae-9254-bdf3f84dc12d'} is completed#033[00m Dec 2 05:12:31 localhost podman[322257]: Dec 2 05:12:31 localhost podman[322257]: 2025-12-02 10:12:31.490741855 +0000 UTC m=+0.071562689 container create ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:12:31 localhost systemd[1]: Started libpod-conmon-ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305.scope. 
Dec 2 05:12:31 localhost systemd[1]: Started libcrun container. Dec 2 05:12:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787199df7695c6c24c2e87676ed9a156fdbe63f95ead2467b2e5b6b6b47c8524/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:12:31 localhost podman[322257]: 2025-12-02 10:12:31.45412771 +0000 UTC m=+0.034948544 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:12:31 localhost podman[322257]: 2025-12-02 10:12:31.563170741 +0000 UTC m=+0.143991575 container init ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:12:31 localhost podman[322257]: 2025-12-02 10:12:31.56995307 +0000 UTC m=+0.150773904 container start ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:12:31 localhost dnsmasq[322288]: started, version 2.85 cachesize 150 Dec 2 05:12:31 localhost dnsmasq[322288]: DNS service limited to local subnets Dec 2 05:12:31 localhost 
dnsmasq[322288]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:12:31 localhost dnsmasq[322288]: warning: no upstream servers configured Dec 2 05:12:31 localhost dnsmasq-dhcp[322288]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:12:31 localhost dnsmasq[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/addn_hosts - 0 addresses Dec 2 05:12:31 localhost dnsmasq-dhcp[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/host Dec 2 05:12:31 localhost dnsmasq-dhcp[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/opts Dec 2 05:12:31 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:31.627 262347 INFO neutron.agent.dhcp.agent [None req-9dac3193-bf5f-4c2e-8889-0f4528b741ac - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:30Z, description=, device_id=9ad261e0-bab0-4724-94e5-b35ab4156358, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=dd3c2d4d-793d-492e-ab43-90dc5d2cfc76, ip_allocation=immediate, mac_address=fa:16:3e:7d:7c:fe, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:28Z, description=, dns_domain=, id=ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-2127784479, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47682, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2862, status=ACTIVE, 
subnets=['7fab66c3-2c3a-4182-8b9f-a90ae9fdebc9'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:29Z, vlan_transparent=None, network_id=ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, port_security_enabled=False, project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2874, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:30Z on network ca9eef71-1213-4a2c-90d0-cfc01ce50fc6#033[00m Dec 2 05:12:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e199 e199: 6 total, 6 up, 6 in Dec 2 05:12:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180,allow rw path=/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/13c658d8-8e0f-421c-9526-6f9449a5852e", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72,allow rw pool=manila_data namespace=fsvolumens_b9e19d1e-178b-4a98-88b5-d79880cd9496"]} v 0) Dec 2 05:12:31 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180,allow rw path=/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/13c658d8-8e0f-421c-9526-6f9449a5852e", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72,allow rw pool=manila_data namespace=fsvolumens_b9e19d1e-178b-4a98-88b5-d79880cd9496"]} : dispatch Dec 2 05:12:31 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:31.692 262347 INFO 
neutron.agent.linux.ip_lib [None req-047e92ce-068f-4bb6-860f-517e6bfd269f - - - - - -] Device tap11279470-8a cannot be used as it has no MAC address#033[00m Dec 2 05:12:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Dec 2 05:12:31 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 2 05:12:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:12:31 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:12:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:12:31 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:12:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:12:31 localhost nova_compute[281045]: 2025-12-02 10:12:31.753 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:31 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:31.759 262347 INFO neutron.agent.dhcp.agent [None req-5c1b0e98-3bd5-4dcd-8b89-7eff9835a2a3 - - - - - -] DHCP configuration for ports {'74817638-3673-4c07-8de2-9aa38992d8f9'} is completed#033[00m Dec 2 05:12:31 localhost kernel: device tap11279470-8a entered promiscuous mode 
Dec 2 05:12:31 localhost NetworkManager[5967]: [1764670351.7666] manager: (tap11279470-8a): new Generic device (/org/freedesktop/NetworkManager/Devices/45) Dec 2 05:12:31 localhost systemd-udevd[322101]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:12:31 localhost nova_compute[281045]: 2025-12-02 10:12:31.767 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:31 localhost nova_compute[281045]: 2025-12-02 10:12:31.773 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:31 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev 817fc4c9-450f-4b89-a12c-f69158f38a89 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:12:31 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 817fc4c9-450f-4b89-a12c-f69158f38a89 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:12:31 localhost ceph-mgr[287188]: [progress INFO root] Completed event 817fc4c9-450f-4b89-a12c-f69158f38a89 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:12:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:12:31 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:12:31 localhost journal[229262]: ethtool ioctl error on tap11279470-8a: No such device Dec 2 05:12:31 localhost nova_compute[281045]: 2025-12-02 10:12:31.790 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:31 localhost journal[229262]: ethtool ioctl error on tap11279470-8a: No such device Dec 2 
05:12:31 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < "" Dec 2 05:12:31 localhost journal[229262]: ethtool ioctl error on tap11279470-8a: No such device Dec 2 05:12:31 localhost journal[229262]: ethtool ioctl error on tap11279470-8a: No such device Dec 2 05:12:31 localhost journal[229262]: ethtool ioctl error on tap11279470-8a: No such device Dec 2 05:12:31 localhost journal[229262]: ethtool ioctl error on tap11279470-8a: No such device Dec 2 05:12:31 localhost journal[229262]: ethtool ioctl error on tap11279470-8a: No such device Dec 2 05:12:31 localhost journal[229262]: ethtool ioctl error on tap11279470-8a: No such device Dec 2 05:12:31 localhost nova_compute[281045]: 2025-12-02 10:12:31.827 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:31 localhost nova_compute[281045]: 2025-12-02 10:12:31.853 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:31 localhost dnsmasq[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/addn_hosts - 1 addresses Dec 2 05:12:31 localhost dnsmasq-dhcp[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/host Dec 2 05:12:31 localhost dnsmasq-dhcp[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/opts Dec 2 05:12:31 localhost podman[322323]: 2025-12-02 10:12:31.869596717 +0000 UTC m=+0.061678646 container kill ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, 
org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:12:32 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:32.096 262347 INFO neutron.agent.dhcp.agent [None req-42e6126b-2fe3-4b22-8a95-9fc409c111c5 - - - - - -] DHCP configuration for ports {'dd3c2d4d-793d-492e-ab43-90dc5d2cfc76'} is completed#033[00m Dec 2 05:12:32 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 2 05:12:32 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180,allow rw path=/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/13c658d8-8e0f-421c-9526-6f9449a5852e", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72,allow rw pool=manila_data namespace=fsvolumens_b9e19d1e-178b-4a98-88b5-d79880cd9496"]} : dispatch Dec 2 05:12:32 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180,allow rw path=/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/13c658d8-8e0f-421c-9526-6f9449a5852e", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72,allow rw pool=manila_data namespace=fsvolumens_b9e19d1e-178b-4a98-88b5-d79880cd9496"]} : dispatch Dec 2 
05:12:32 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180,allow rw path=/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/13c658d8-8e0f-421c-9526-6f9449a5852e", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72,allow rw pool=manila_data namespace=fsvolumens_b9e19d1e-178b-4a98-88b5-d79880cd9496"]}]': finished Dec 2 05:12:32 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 2 05:12:32 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:12:32 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:12:32 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:12:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:12:32 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:32.315 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:30Z, description=, device_id=9ad261e0-bab0-4724-94e5-b35ab4156358, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=dd3c2d4d-793d-492e-ab43-90dc5d2cfc76, ip_allocation=immediate, mac_address=fa:16:3e:7d:7c:fe, name=, network=admin_state_up=True, availability_zone_hints=[], 
availability_zones=[], created_at=2025-12-02T10:12:28Z, description=, dns_domain=, id=ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-2127784479, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47682, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2862, status=ACTIVE, subnets=['7fab66c3-2c3a-4182-8b9f-a90ae9fdebc9'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:29Z, vlan_transparent=None, network_id=ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, port_security_enabled=False, project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2874, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:30Z on network ca9eef71-1213-4a2c-90d0-cfc01ce50fc6#033[00m Dec 2 05:12:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:32 localhost podman[322421]: 2025-12-02 10:12:32.562523948 +0000 UTC m=+0.109243288 container kill ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:12:32 localhost dnsmasq[322288]: read 
/var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/addn_hosts - 1 addresses Dec 2 05:12:32 localhost dnsmasq-dhcp[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/host Dec 2 05:12:32 localhost dnsmasq-dhcp[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/opts Dec 2 05:12:32 localhost podman[322448]: Dec 2 05:12:32 localhost podman[322448]: 2025-12-02 10:12:32.598999959 +0000 UTC m=+0.076596805 container create b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e84d56b5-6863-43e4-89bd-1291a3d50373, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:12:32 localhost systemd[1]: Started libpod-conmon-b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d.scope. Dec 2 05:12:32 localhost systemd[1]: Started libcrun container. 
Dec 2 05:12:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761371641b9c35bd92f4d96d2b80b5565ec48e80d1083d2493cf238bafa2efcc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:12:32 localhost podman[322448]: 2025-12-02 10:12:32.567772319 +0000 UTC m=+0.045369195 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:12:32 localhost podman[322448]: 2025-12-02 10:12:32.672048763 +0000 UTC m=+0.149645669 container init b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e84d56b5-6863-43e4-89bd-1291a3d50373, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 2 05:12:32 localhost podman[322448]: 2025-12-02 10:12:32.682947618 +0000 UTC m=+0.160544494 container start b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e84d56b5-6863-43e4-89bd-1291a3d50373, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:12:32 localhost dnsmasq[322473]: started, version 2.85 cachesize 150 Dec 2 05:12:32 localhost dnsmasq[322473]: DNS service limited to local subnets Dec 2 05:12:32 localhost dnsmasq[322473]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:12:32 localhost dnsmasq[322473]: warning: no upstream servers configured Dec 2 05:12:32 localhost dnsmasq-dhcp[322473]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:12:32 localhost dnsmasq[322473]: read /var/lib/neutron/dhcp/e84d56b5-6863-43e4-89bd-1291a3d50373/addn_hosts - 0 addresses Dec 2 05:12:32 localhost dnsmasq-dhcp[322473]: read /var/lib/neutron/dhcp/e84d56b5-6863-43e4-89bd-1291a3d50373/host Dec 2 05:12:32 localhost dnsmasq-dhcp[322473]: read /var/lib/neutron/dhcp/e84d56b5-6863-43e4-89bd-1291a3d50373/opts Dec 2 05:12:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e200 e200: 6 total, 6 up, 6 in Dec 2 05:12:32 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:32.734 262347 INFO neutron.agent.dhcp.agent [None req-894111e8-1c3c-4b4c-b907-32a7898efa78 - - - - - -] DHCP configuration for ports {'dd3c2d4d-793d-492e-ab43-90dc5d2cfc76'} is completed#033[00m Dec 2 05:12:32 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:32.853 262347 INFO neutron.agent.dhcp.agent [None req-a9328db3-b464-4950-b4cf-099159acb6b9 - - - - - -] DHCP configuration for ports {'48d4a17b-d013-47e9-85fa-29ec2f97b779'} is completed#033[00m Dec 2 05:12:32 localhost nova_compute[281045]: 2025-12-02 10:12:32.870 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:32 localhost dnsmasq[322473]: exiting on receipt of SIGTERM Dec 2 05:12:32 localhost podman[322490]: 2025-12-02 10:12:32.999151824 +0000 UTC m=+0.059012424 container kill b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e84d56b5-6863-43e4-89bd-1291a3d50373, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:12:33 localhost systemd[1]: libpod-b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d.scope: Deactivated successfully. Dec 2 05:12:33 localhost podman[322504]: 2025-12-02 10:12:33.070577528 +0000 UTC m=+0.060545990 container died b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e84d56b5-6863-43e4-89bd-1291a3d50373, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:12:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:12:33 localhost podman[322504]: 2025-12-02 10:12:33.108388421 +0000 UTC m=+0.098356833 container cleanup b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e84d56b5-6863-43e4-89bd-1291a3d50373, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:12:33 localhost systemd[1]: libpod-conmon-b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d.scope: Deactivated successfully. 
Dec 2 05:12:33 localhost podman[322511]: 2025-12-02 10:12:33.159298225 +0000 UTC m=+0.133247046 container remove b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e84d56b5-6863-43e4-89bd-1291a3d50373, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:12:33 localhost nova_compute[281045]: 2025-12-02 10:12:33.171 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:33 localhost kernel: device tap11279470-8a left promiscuous mode Dec 2 05:12:33 localhost nova_compute[281045]: 2025-12-02 10:12:33.187 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v485: 177 pgs: 177 active+clean; 198 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 109 KiB/s rd, 69 KiB/s wr, 157 op/s Dec 2 05:12:33 localhost podman[322532]: 2025-12-02 10:12:33.20373053 +0000 UTC m=+0.088334976 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:12:33 localhost podman[322532]: 2025-12-02 10:12:33.213326595 +0000 UTC m=+0.097931021 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 
'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 2 05:12:33 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 05:12:33 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:12:33 localhost systemd[1]: tmp-crun.dRmwHR.mount: Deactivated successfully. Dec 2 05:12:33 localhost systemd[1]: var-lib-containers-storage-overlay-761371641b9c35bd92f4d96d2b80b5565ec48e80d1083d2493cf238bafa2efcc-merged.mount: Deactivated successfully. Dec 2 05:12:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:12:33 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:33.608 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:30Z, description=, device_id=48efaaba-b00b-49e5-9453-c3674fe08fa7, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0e034430-6b11-4bae-9254-bdf3f84dc12d, ip_allocation=immediate, mac_address=fa:16:3e:27:a3:1d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:22Z, description=, dns_domain=, id=0ed6501a-31af-475e-83c5-b9d22d72adda, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-1168865774-network, port_security_enabled=True, project_id=a1854cb9cd7e49c4a6a223acc8d74075, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=20837, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2841, status=ACTIVE, subnets=['418123ed-5885-4e16-91c2-f687fe4bb883'], tags=[], tenant_id=a1854cb9cd7e49c4a6a223acc8d74075, updated_at=2025-12-02T10:12:23Z, vlan_transparent=None, network_id=0ed6501a-31af-475e-83c5-b9d22d72adda, port_security_enabled=False, project_id=a1854cb9cd7e49c4a6a223acc8d74075, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2872, status=DOWN, tags=[], tenant_id=a1854cb9cd7e49c4a6a223acc8d74075, updated_at=2025-12-02T10:12:30Z on network 0ed6501a-31af-475e-83c5-b9d22d72adda#033[00m Dec 2 05:12:33 localhost podman[239757]: time="2025-12-02T10:12:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:12:33 localhost nova_compute[281045]: 2025-12-02 
10:12:33.640 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:33 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:33.655 262347 INFO neutron.agent.dhcp.agent [None req-d80070d0-e1b4-4c42-bba6-698f47362b1a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:33 localhost podman[239757]: @ - - [02/Dec/2025:10:12:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 160387 "" "Go-http-client/1.1" Dec 2 05:12:33 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:33.656 262347 INFO neutron.agent.dhcp.agent [None req-d80070d0-e1b4-4c42-bba6-698f47362b1a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:33 localhost systemd[1]: run-netns-qdhcp\x2de84d56b5\x2d6863\x2d43e4\x2d89bd\x2d1291a3d50373.mount: Deactivated successfully. Dec 2 05:12:33 localhost podman[239757]: @ - - [02/Dec/2025:10:12:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20155 "" "Go-http-client/1.1" Dec 2 05:12:33 localhost podman[322574]: 2025-12-02 10:12:33.846713987 +0000 UTC m=+0.054392782 container kill b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:33 localhost systemd[1]: tmp-crun.c55JLf.mount: Deactivated successfully. 
Dec 2 05:12:33 localhost dnsmasq[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/addn_hosts - 1 addresses Dec 2 05:12:33 localhost dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/host Dec 2 05:12:33 localhost dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/opts Dec 2 05:12:34 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:34.095 262347 INFO neutron.agent.dhcp.agent [None req-60d0e473-6437-448e-9d6a-619b2132d567 - - - - - -] DHCP configuration for ports {'0e034430-6b11-4bae-9254-bdf3f84dc12d'} is completed#033[00m Dec 2 05:12:34 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "auth_id": "bob", "format": "json"}]: dispatch Dec 2 05:12:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < "" Dec 2 05:12:34 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Dec 2 05:12:34 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 2 05:12:34 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72"]} v 0) Dec 2 05:12:34 localhost ceph-mon[301710]: log_channel(audit) log [INF] : 
from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72"]} : dispatch Dec 2 05:12:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < "" Dec 2 05:12:34 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "auth_id": "bob", "format": "json"}]: dispatch Dec 2 05:12:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < "" Dec 2 05:12:34 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/13c658d8-8e0f-421c-9526-6f9449a5852e Dec 2 05:12:34 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:12:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < "" Dec 2 05:12:34 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 2 05:12:34 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72"]} : dispatch Dec 2 05:12:34 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72"]} : dispatch Dec 2 05:12:34 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72"]}]': finished Dec 2 05:12:35 localhost neutron_sriov_agent[255428]: 2025-12-02 10:12:35.025 2 INFO neutron.agent.securitygroups_rpc [None req-c29e5bff-b968-4a83-beb0-2b46e231db68 27e8ee5045c2430583000f8d62f6e4f1 096ffa0a51b143039159efc232ec547a - - default default] Security group member updated ['0a7d83ca-acbf-4932-884e-9eff3b0bc0ff']#033[00m Dec 2 05:12:35 localhost nova_compute[281045]: 2025-12-02 10:12:35.101 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:35.101 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 
'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:35.102 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:12:35 localhost nova_compute[281045]: 2025-12-02 10:12:35.180 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v486: 177 pgs: 177 active+clean; 198 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 47 KiB/s wr, 107 op/s Dec 2 05:12:36 localhost nova_compute[281045]: 2025-12-02 10:12:36.083 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:36 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:36.104 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:12:36 localhost neutron_sriov_agent[255428]: 2025-12-02 10:12:36.287 2 INFO neutron.agent.securitygroups_rpc [None req-11289385-b413-4a50-89a7-e0a67d214908 27e8ee5045c2430583000f8d62f6e4f1 096ffa0a51b143039159efc232ec547a - - default default] Security group member updated ['0a7d83ca-acbf-4932-884e-9eff3b0bc0ff']#033[00m Dec 2 05:12:36 localhost 
ceph-mon[301710]: mon.np0005541914@2(peon).osd e201 e201: 6 total, 6 up, 6 in Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0. Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.646728) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40 Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356646775, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2850, "num_deletes": 268, "total_data_size": 4650827, "memory_usage": 4791072, "flush_reason": "Manual Compaction"} Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356666737, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 3040370, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25302, "largest_seqno": 28146, "table_properties": {"data_size": 3028743, "index_size": 7492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27700, "raw_average_key_size": 22, "raw_value_size": 3004285, "raw_average_value_size": 2450, "num_data_blocks": 314, "num_entries": 1226, "num_filter_entries": 1226, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, 
"comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670242, "oldest_key_time": 1764670242, "file_creation_time": 1764670356, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}} Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 20069 microseconds, and 7811 cpu microseconds. Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.666793) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 3040370 bytes OK Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.666821) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.668807) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.668831) EVENT_LOG_v1 {"time_micros": 1764670356668824, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.668856) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 
10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4637293, prev total WAL file size 4637293, number of live WAL files 2. Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.670093) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. '7061786F73003132383032' seq:0, type:0; will stop at (end) Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(2969KB)], [39(16MB)] Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356670231, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 20006249, "oldest_snapshot_seqno": -1} Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 13620 keys, 18490102 bytes, temperature: kUnknown Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356778287, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 18490102, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18411103, "index_size": 
43826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34117, "raw_key_size": 363814, "raw_average_key_size": 26, "raw_value_size": 18178196, "raw_average_value_size": 1334, "num_data_blocks": 1659, "num_entries": 13620, "num_filter_entries": 13620, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670356, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}} Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.778678) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 18490102 bytes Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.780186) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.9 rd, 170.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 16.2 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(12.7) write-amplify(6.1) OK, records in: 14177, records dropped: 557 output_compression: NoCompression Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.780230) EVENT_LOG_v1 {"time_micros": 1764670356780201, "job": 22, "event": "compaction_finished", "compaction_time_micros": 108180, "compaction_time_cpu_micros": 55933, "output_level": 6, "num_output_files": 1, "total_output_size": 18490102, "num_input_records": 14177, "num_output_records": 13620, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356780964, "job": 22, "event": "table_file_deletion", "file_number": 41} Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356783132, "job": 
22, "event": "table_file_deletion", "file_number": 39} Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.669997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.783250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.783257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.783260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.783263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:12:36 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.783266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:12:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:12:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:12:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:12:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:12:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:12:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:12:37 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:12:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:12:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v488: 177 pgs: 177 active+clean; 198 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 47 KiB/s wr, 107 op/s Dec 2 05:12:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' Dec 2 05:12:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta' Dec 2 05:12:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:12:37 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "format": "json"}]: dispatch Dec 2 05:12:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:12:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:12:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e202 e202: 6 total, 6 up, 6 in Dec 2 05:12:37 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "bob", "format": "json"}]: dispatch Dec 2 05:12:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:37 localhost nova_compute[281045]: 2025-12-02 10:12:37.881 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Dec 2 05:12:37 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 2 05:12:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0) Dec 2 05:12:37 localhost ceph-mon[301710]: log_channel(audit) log 
[INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Dec 2 05:12:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:37 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "bob", "format": "json"}]: dispatch Dec 2 05:12:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:38 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180 Dec 2 05:12:38 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:12:38 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:38 localhost nova_compute[281045]: 2025-12-02 10:12:38.642 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:38 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 2 05:12:38 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Dec 2 05:12:38 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Dec 2 05:12:38 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished Dec 2 05:12:39 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v490: 177 pgs: 177 active+clean; 199 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 92 KiB/s wr, 144 op/s Dec 2 05:12:39 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:12:39 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/681489516' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:12:39 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:39.948 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:39Z, description=, device_id=7ab6a068-34a0-43a2-9f3f-037d65a744e4, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f4bff906-1f9b-4162-bd71-18028fa4ee89, ip_allocation=immediate, mac_address=fa:16:3e:63:08:8f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, 
provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2932, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:12:39Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:12:40 localhost podman[322612]: 2025-12-02 10:12:40.169174593 +0000 UTC m=+0.057794977 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 05:12:40 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses Dec 2 05:12:40 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:12:40 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:12:40 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e72625fe-e204-4902-a792-e35cd0c49318", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch 
Dec 2 05:12:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < "" Dec 2 05:12:40 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:40.466 262347 INFO neutron.agent.dhcp.agent [None req-59443a53-c9f6-41ba-b868-92507d8c3f7d - - - - - -] DHCP configuration for ports {'f4bff906-1f9b-4162-bd71-18028fa4ee89'} is completed#033[00m Dec 2 05:12:40 localhost nova_compute[281045]: 2025-12-02 10:12:40.780 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:40 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e72625fe-e204-4902-a792-e35cd0c49318/.meta.tmp' Dec 2 05:12:40 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e72625fe-e204-4902-a792-e35cd0c49318/.meta.tmp' to config b'/volumes/_nogroup/e72625fe-e204-4902-a792-e35cd0c49318/.meta' Dec 2 05:12:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < "" Dec 2 05:12:40 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e72625fe-e204-4902-a792-e35cd0c49318", "format": "json"}]: dispatch Dec 2 05:12:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < "" Dec 2 05:12:40 localhost ceph-mgr[287188]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < "" Dec 2 05:12:41 localhost nova_compute[281045]: 2025-12-02 10:12:41.084 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v491: 177 pgs: 177 active+clean; 199 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 39 KiB/s wr, 36 op/s Dec 2 05:12:41 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "format": "json"}]: dispatch Dec 2 05:12:41 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:12:41 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:12:41 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:12:41.365+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b9e19d1e-178b-4a98-88b5-d79880cd9496' of type subvolume Dec 2 05:12:41 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b9e19d1e-178b-4a98-88b5-d79880cd9496' of type subvolume Dec 2 05:12:41 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": 
"b9e19d1e-178b-4a98-88b5-d79880cd9496", "force": true, "format": "json"}]: dispatch Dec 2 05:12:41 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < "" Dec 2 05:12:41 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496'' moved to trashcan Dec 2 05:12:41 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:12:41 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < "" Dec 2 05:12:41 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:12:41 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:41 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp' Dec 2 05:12:41 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp' to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta' Dec 2 05:12:41 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:41 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "format": "json"}]: dispatch Dec 2 05:12:41 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:41 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:41 localhost nova_compute[281045]: 2025-12-02 10:12:41.948 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:42 localhost openstack_network_exporter[241816]: ERROR 10:12:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:12:42 localhost openstack_network_exporter[241816]: ERROR 10:12:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:12:42 localhost openstack_network_exporter[241816]: ERROR 10:12:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:12:42 localhost openstack_network_exporter[241816]: ERROR 10:12:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:12:42 localhost openstack_network_exporter[241816]: Dec 2 05:12:42 localhost openstack_network_exporter[241816]: ERROR 10:12:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing 
datapath Dec 2 05:12:42 localhost openstack_network_exporter[241816]: Dec 2 05:12:42 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. Dec 2 05:12:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v492: 177 pgs: 177 active+clean; 443 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 88 KiB/s rd, 31 MiB/s wr, 143 op/s Dec 2 05:12:43 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "e72625fe-e204-4902-a792-e35cd0c49318", "new_size": 2147483648, "format": "json"}]: dispatch Dec 2 05:12:43 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < "" Dec 2 05:12:43 localhost nova_compute[281045]: 2025-12-02 10:12:43.646 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:43 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < "" Dec 2 05:12:44 localhost neutron_sriov_agent[255428]: 2025-12-02 10:12:44.503 2 INFO neutron.agent.securitygroups_rpc [None req-4efd9dba-690c-4981-8a45-b91486e5b5eb 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:12:44 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : 
from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "format": "json"}]: dispatch Dec 2 05:12:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:12:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:12:44 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4951f94c-f3a4-4170-9869-8238a9dc7b72' of type subvolume Dec 2 05:12:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:12:44.618+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4951f94c-f3a4-4170-9869-8238a9dc7b72' of type subvolume Dec 2 05:12:44 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "force": true, "format": "json"}]: dispatch Dec 2 05:12:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:44 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72'' moved to trashcan Dec 2 05:12:44 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:12:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < "" Dec 2 05:12:44 localhost nova_compute[281045]: 2025-12-02 10:12:44.772 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:44 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "f755cd55-747e-4403-a245-43cbf9abc4bb", "format": "json"}]: dispatch Dec 2 05:12:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f755cd55-747e-4403-a245-43cbf9abc4bb, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f755cd55-747e-4403-a245-43cbf9abc4bb, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:45 localhost nova_compute[281045]: 2025-12-02 10:12:45.009 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v493: 177 pgs: 177 active+clean; 443 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 82 KiB/s rd, 29 MiB/s wr, 133 op/s Dec 2 05:12:46 localhost nova_compute[281045]: 2025-12-02 10:12:46.107 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' 
cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e72625fe-e204-4902-a792-e35cd0c49318", "format": "json"}]: dispatch Dec 2 05:12:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e72625fe-e204-4902-a792-e35cd0c49318, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:12:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e72625fe-e204-4902-a792-e35cd0c49318, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:12:46 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e72625fe-e204-4902-a792-e35cd0c49318' of type subvolume Dec 2 05:12:46 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:12:46.913+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e72625fe-e204-4902-a792-e35cd0c49318' of type subvolume Dec 2 05:12:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e72625fe-e204-4902-a792-e35cd0c49318", "force": true, "format": "json"}]: dispatch Dec 2 05:12:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < "" Dec 2 05:12:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e72625fe-e204-4902-a792-e35cd0c49318'' moved to trashcan Dec 2 05:12:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:12:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, 
format:json, prefix:fs subvolume rm, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < "" Dec 2 05:12:47 localhost neutron_sriov_agent[255428]: 2025-12-02 10:12:47.116 2 INFO neutron.agent.securitygroups_rpc [None req-6b684e17-61a9-4f64-b7d6-8863ef06b405 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:12:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v494: 177 pgs: 177 active+clean; 443 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 70 KiB/s rd, 24 MiB/s wr, 114 op/s Dec 2 05:12:47 localhost neutron_sriov_agent[255428]: 2025-12-02 10:12:47.328 2 INFO neutron.agent.securitygroups_rpc [None req-6b684e17-61a9-4f64-b7d6-8863ef06b405 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:12:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:47 localhost neutron_sriov_agent[255428]: 2025-12-02 10:12:47.809 2 INFO neutron.agent.securitygroups_rpc [None req-2a241596-c202-4fbe-a6b6-7dd1095be821 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:12:48 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "9dc12954-5289-4f2d-a2ed-7e457be2a4e6", "format": "json"}]: dispatch Dec 2 05:12:48 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, 
snap_name:9dc12954-5289-4f2d-a2ed-7e457be2a4e6, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:48 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9dc12954-5289-4f2d-a2ed-7e457be2a4e6, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:48 localhost neutron_sriov_agent[255428]: 2025-12-02 10:12:48.422 2 INFO neutron.agent.securitygroups_rpc [None req-2a4c2d24-b746-4ff5-abcb-d93c9952342d 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:12:48 localhost nova_compute[281045]: 2025-12-02 10:12:48.649 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:49 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v495: 177 pgs: 177 active+clean; 839 MiB data, 2.8 GiB used, 39 GiB / 42 GiB avail; 108 KiB/s rd, 56 MiB/s wr, 185 op/s Dec 2 05:12:49 localhost nova_compute[281045]: 2025-12-02 10:12:49.987 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:50 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "977a2594-0007-4fab-a7e2-b6bc2dee3113", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:12:50 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < "" Dec 2 05:12:50 localhost ceph-mgr[287188]: 
[volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/977a2594-0007-4fab-a7e2-b6bc2dee3113/.meta.tmp' Dec 2 05:12:50 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/977a2594-0007-4fab-a7e2-b6bc2dee3113/.meta.tmp' to config b'/volumes/_nogroup/977a2594-0007-4fab-a7e2-b6bc2dee3113/.meta' Dec 2 05:12:50 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < "" Dec 2 05:12:50 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "977a2594-0007-4fab-a7e2-b6bc2dee3113", "format": "json"}]: dispatch Dec 2 05:12:50 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < "" Dec 2 05:12:50 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < "" Dec 2 05:12:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:12:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:12:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:12:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. 
Dec 2 05:12:51 localhost podman[322635]: 2025-12-02 10:12:51.109776008 +0000 UTC m=+0.075026877 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:12:51 localhost nova_compute[281045]: 2025-12-02 10:12:51.146 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:51 localhost podman[322642]: 2025-12-02 10:12:51.170327178 +0000 UTC m=+0.135673800 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 05:12:51 localhost podman[322635]: 2025-12-02 10:12:51.175910959 +0000 UTC m=+0.141161848 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:12:51 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 05:12:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v496: 177 pgs: 177 active+clean; 839 MiB data, 2.8 GiB used, 39 GiB / 42 GiB avail; 86 KiB/s rd, 53 MiB/s wr, 151 op/s Dec 2 05:12:51 localhost podman[322634]: 2025-12-02 10:12:51.222611905 +0000 UTC m=+0.221639491 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 2 05:12:51 localhost podman[322642]: 2025-12-02 10:12:51.25111997 +0000 UTC m=+0.216466672 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:12:51 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:12:51 localhost podman[322636]: 2025-12-02 10:12:51.3318041 +0000 UTC m=+0.311120051 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Dec 2 05:12:51 localhost podman[322634]: 2025-12-02 10:12:51.356802818 +0000 UTC m=+0.355830494 container exec_died 
225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent) Dec 2 05:12:51 localhost podman[322636]: 2025-12-02 10:12:51.365901077 +0000 UTC m=+0.345217048 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b 
(image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:12:51 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 05:12:51 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:12:51 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "9dc12954-5289-4f2d-a2ed-7e457be2a4e6_0d6f16e3-0b37-49c0-9f62-73be706c6c58", "force": true, "format": "json"}]: dispatch Dec 2 05:12:51 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9dc12954-5289-4f2d-a2ed-7e457be2a4e6_0d6f16e3-0b37-49c0-9f62-73be706c6c58, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:51 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp' Dec 2 05:12:51 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp' to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta' Dec 2 05:12:51 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9dc12954-5289-4f2d-a2ed-7e457be2a4e6_0d6f16e3-0b37-49c0-9f62-73be706c6c58, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:51 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "9dc12954-5289-4f2d-a2ed-7e457be2a4e6", "force": true, "format": "json"}]: dispatch Dec 2 05:12:51 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:9dc12954-5289-4f2d-a2ed-7e457be2a4e6, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:52 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp' Dec 2 05:12:52 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp' to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta' Dec 2 05:12:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9dc12954-5289-4f2d-a2ed-7e457be2a4e6, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:52 localhost nova_compute[281045]: 2025-12-02 10:12:52.334 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e203 e203: 6 total, 6 up, 6 in Dec 2 05:12:53 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v498: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 122 KiB/s rd, 80 MiB/s wr, 215 op/s Dec 2 05:12:53 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "977a2594-0007-4fab-a7e2-b6bc2dee3113", "new_size": 2147483648, "format": "json"}]: dispatch Dec 2 05:12:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs 
subvolume resize, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < "" Dec 2 05:12:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < "" Dec 2 05:12:53 localhost nova_compute[281045]: 2025-12-02 10:12:53.679 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:53 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e204 e204: 6 total, 6 up, 6 in Dec 2 05:12:53 localhost neutron_sriov_agent[255428]: 2025-12-02 10:12:53.791 2 INFO neutron.agent.securitygroups_rpc [None req-099086ea-fe05-4147-b538-a150ea38436a 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:12:54 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e205 e205: 6 total, 6 up, 6 in Dec 2 05:12:55 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "f755cd55-747e-4403-a245-43cbf9abc4bb_c1048806-17d0-47ca-8ee7-cf284ee33136", "force": true, "format": "json"}]: dispatch Dec 2 05:12:55 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f755cd55-747e-4403-a245-43cbf9abc4bb_c1048806-17d0-47ca-8ee7-cf284ee33136, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:55 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp' Dec 2 05:12:55 localhost 
ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp' to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta' Dec 2 05:12:55 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f755cd55-747e-4403-a245-43cbf9abc4bb_c1048806-17d0-47ca-8ee7-cf284ee33136, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:55 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "f755cd55-747e-4403-a245-43cbf9abc4bb", "force": true, "format": "json"}]: dispatch Dec 2 05:12:55 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f755cd55-747e-4403-a245-43cbf9abc4bb, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:55 localhost neutron_sriov_agent[255428]: 2025-12-02 10:12:55.084 2 INFO neutron.agent.securitygroups_rpc [None req-51b03213-9479-4e3e-bbd0-ea81e004b8c6 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:12:55 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp' Dec 2 05:12:55 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp' to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta' Dec 2 05:12:55 localhost 
ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f755cd55-747e-4403-a245-43cbf9abc4bb, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:55 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v501: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 118 KiB/s rd, 67 MiB/s wr, 199 op/s Dec 2 05:12:55 localhost nova_compute[281045]: 2025-12-02 10:12:55.587 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:55 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e206 e206: 6 total, 6 up, 6 in Dec 2 05:12:56 localhost nova_compute[281045]: 2025-12-02 10:12:56.147 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:56 localhost dnsmasq[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/addn_hosts - 0 addresses Dec 2 05:12:56 localhost podman[322733]: 2025-12-02 10:12:56.250628078 +0000 UTC m=+0.064252035 container kill ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:12:56 localhost dnsmasq-dhcp[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/host Dec 2 05:12:56 localhost dnsmasq-dhcp[322288]: read 
/var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/opts Dec 2 05:12:56 localhost nova_compute[281045]: 2025-12-02 10:12:56.420 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:56 localhost ovn_controller[153778]: 2025-12-02T10:12:56Z|00221|binding|INFO|Releasing lport db728fdc-d8ab-468d-a1cd-ab731f58d6dc from this chassis (sb_readonly=0) Dec 2 05:12:56 localhost ovn_controller[153778]: 2025-12-02T10:12:56Z|00222|binding|INFO|Setting lport db728fdc-d8ab-468d-a1cd-ab731f58d6dc down in Southbound Dec 2 05:12:56 localhost kernel: device tapdb728fdc-d8 left promiscuous mode Dec 2 05:12:56 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:56.430 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcd3979d-613c-4a99-a744-aee0cbcf87d6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=db728fdc-d8ab-468d-a1cd-ab731f58d6dc) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:56 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:56.432 159483 INFO neutron.agent.ovn.metadata.agent [-] Port db728fdc-d8ab-468d-a1cd-ab731f58d6dc in datapath ca9eef71-1213-4a2c-90d0-cfc01ce50fc6 unbound from our chassis#033[00m Dec 2 05:12:56 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:56.434 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ca9eef71-1213-4a2c-90d0-cfc01ce50fc6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:12:56 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:56.435 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[7fea05c5-7125-4c1b-9fcc-08909b7da4f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:56 localhost nova_compute[281045]: 2025-12-02 10:12:56.448 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:56 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "977a2594-0007-4fab-a7e2-b6bc2dee3113", "format": "json"}]: dispatch Dec 2 05:12:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:12:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:12:56 localhost 
ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '977a2594-0007-4fab-a7e2-b6bc2dee3113' of type subvolume Dec 2 05:12:56 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:12:56.655+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '977a2594-0007-4fab-a7e2-b6bc2dee3113' of type subvolume Dec 2 05:12:56 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "977a2594-0007-4fab-a7e2-b6bc2dee3113", "force": true, "format": "json"}]: dispatch Dec 2 05:12:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < "" Dec 2 05:12:56 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/977a2594-0007-4fab-a7e2-b6bc2dee3113'' moved to trashcan Dec 2 05:12:56 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:12:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < "" Dec 2 05:12:56 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e207 e207: 6 total, 6 up, 6 in Dec 2 05:12:56 localhost dnsmasq[322288]: exiting on receipt of SIGTERM Dec 2 05:12:56 localhost podman[322771]: 2025-12-02 10:12:56.819929091 +0000 UTC m=+0.061231283 container kill ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 2 05:12:56 localhost systemd[1]: libpod-ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305.scope: Deactivated successfully. Dec 2 05:12:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:12:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:12:56 localhost podman[322787]: 2025-12-02 10:12:56.903053496 +0000 UTC m=+0.060932924 container died ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 2 05:12:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305-userdata-shm.mount: Deactivated successfully. Dec 2 05:12:56 localhost systemd[1]: var-lib-containers-storage-overlay-787199df7695c6c24c2e87676ed9a156fdbe63f95ead2467b2e5b6b6b47c8524-merged.mount: Deactivated successfully. 
Dec 2 05:12:56 localhost podman[322787]: 2025-12-02 10:12:56.944196969 +0000 UTC m=+0.102076317 container remove ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:12:56 localhost systemd[1]: libpod-conmon-ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305.scope: Deactivated successfully. Dec 2 05:12:56 localhost podman[322793]: 2025-12-02 10:12:56.984847979 +0000 UTC m=+0.138866649 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', 
'--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:12:57 localhost podman[322795]: 2025-12-02 10:12:57.048605268 +0000 UTC m=+0.197066417 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, config_id=edpm, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, architecture=x86_64) Dec 2 05:12:57 localhost podman[322793]: 2025-12-02 10:12:57.074407451 +0000 UTC m=+0.228426181 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', 
'--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:12:57 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:12:57 localhost podman[322795]: 2025-12-02 10:12:57.091936379 +0000 UTC m=+0.240397588 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc.) Dec 2 05:12:57 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:12:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v504: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail Dec 2 05:12:57 localhost systemd[1]: run-netns-qdhcp\x2dca9eef71\x2d1213\x2d4a2c\x2d90d0\x2dcfc01ce50fc6.mount: Deactivated successfully. Dec 2 05:12:57 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:57.269 262347 INFO neutron.agent.dhcp.agent [None req-3522d6ba-0fd8-426b-b188-90d535cb028c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:57 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:57.270 262347 INFO neutron.agent.dhcp.agent [None req-3522d6ba-0fd8-426b-b188-90d535cb028c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:57 localhost podman[322871]: 2025-12-02 10:12:57.385562881 +0000 UTC m=+0.054432084 container kill b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:12:57 localhost dnsmasq[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/addn_hosts - 0 addresses Dec 2 05:12:57 localhost dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/host Dec 2 05:12:57 localhost dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/opts Dec 2 05:12:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 
Dec 2 05:12:57 localhost nova_compute[281045]: 2025-12-02 10:12:57.656 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:57 localhost kernel: device tap2f12f501-39 left promiscuous mode Dec 2 05:12:57 localhost ovn_controller[153778]: 2025-12-02T10:12:57Z|00223|binding|INFO|Releasing lport 2f12f501-3942-418c-89e6-d03f08b5b903 from this chassis (sb_readonly=0) Dec 2 05:12:57 localhost ovn_controller[153778]: 2025-12-02T10:12:57Z|00224|binding|INFO|Setting lport 2f12f501-3942-418c-89e6-d03f08b5b903 down in Southbound Dec 2 05:12:57 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:57.665 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-0ed6501a-31af-475e-83c5-b9d22d72adda', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ed6501a-31af-475e-83c5-b9d22d72adda', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a1854cb9cd7e49c4a6a223acc8d74075', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=290a207a-1202-4b25-ae81-ba8a96163204, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2f12f501-3942-418c-89e6-d03f08b5b903) 
old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:57 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:57.667 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 2f12f501-3942-418c-89e6-d03f08b5b903 in datapath 0ed6501a-31af-475e-83c5-b9d22d72adda unbound from our chassis#033[00m Dec 2 05:12:57 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:57.670 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ed6501a-31af-475e-83c5-b9d22d72adda, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:12:57 localhost ovn_metadata_agent[159477]: 2025-12-02 10:12:57.671 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6e6cd0-cc3c-4d28-b1cf-25ddf87c759e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:57 localhost nova_compute[281045]: 2025-12-02 10:12:57.682 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:57 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:12:57.755 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e208 e208: 6 total, 6 up, 6 in Dec 2 05:12:58 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "format": "json"}]: dispatch Dec 2 05:12:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8115277a-c4bb-4c47-9857-029dcd8c9879, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:12:58 localhost ceph-mgr[287188]: 
[volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8115277a-c4bb-4c47-9857-029dcd8c9879, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:12:58 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8115277a-c4bb-4c47-9857-029dcd8c9879' of type subvolume Dec 2 05:12:58 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:12:58.235+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8115277a-c4bb-4c47-9857-029dcd8c9879' of type subvolume Dec 2 05:12:58 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "force": true, "format": "json"}]: dispatch Dec 2 05:12:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:58 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879'' moved to trashcan Dec 2 05:12:58 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:12:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < "" Dec 2 05:12:58 localhost nova_compute[281045]: 2025-12-02 10:12:58.290 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:58 localhost nova_compute[281045]: 2025-12-02 10:12:58.742 281049 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:58 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e209 e209: 6 total, 6 up, 6 in Dec 2 05:12:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v507: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 173 KiB/s rd, 90 KiB/s wr, 304 op/s Dec 2 05:12:59 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:12:59 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, vol_name:cephfs) < "" Dec 2 05:13:00 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60/.meta.tmp' Dec 2 05:13:00 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60/.meta.tmp' to config b'/volumes/_nogroup/c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60/.meta' Dec 2 05:13:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, vol_name:cephfs) < "" Dec 2 05:13:00 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60", 
"format": "json"}]: dispatch Dec 2 05:13:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, vol_name:cephfs) < "" Dec 2 05:13:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, vol_name:cephfs) < "" Dec 2 05:13:00 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e210 e210: 6 total, 6 up, 6 in Dec 2 05:13:01 localhost nova_compute[281045]: 2025-12-02 10:13:01.151 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v509: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 156 KiB/s rd, 82 KiB/s wr, 274 op/s Dec 2 05:13:01 localhost podman[322911]: 2025-12-02 10:13:01.625934735 +0000 UTC m=+0.068996872 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:13:01 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:13:01 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:13:01 localhost dnsmasq-dhcp[262677]: read 
/var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:13:01 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e211 e211: 6 total, 6 up, 6 in Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0. Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.666512) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43 Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381666575, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 746, "num_deletes": 261, "total_data_size": 736960, "memory_usage": 750920, "flush_reason": "Manual Compaction"} Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381677275, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 482171, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28151, "largest_seqno": 28892, "table_properties": {"data_size": 478562, "index_size": 1400, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9028, "raw_average_key_size": 19, "raw_value_size": 470871, "raw_average_value_size": 1037, "num_data_blocks": 60, "num_entries": 454, "num_filter_entries": 454, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, 
"filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670356, "oldest_key_time": 1764670356, "file_creation_time": 1764670381, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}} Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 10821 microseconds, and 2431 cpu microseconds. Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.677328) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 482171 bytes OK Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.677355) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.687321) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.687368) EVENT_LOG_v1 {"time_micros": 1764670381687357, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.687399) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 732773, prev total WAL file size 732773, number of live WAL files 2. Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.688121) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323730' seq:72057594037927935, type:22 .. 
'6C6F676D0034353234' seq:0, type:0; will stop at (end) Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(470KB)], [42(17MB)] Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381688162, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 18972273, "oldest_snapshot_seqno": -1} Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 13532 keys, 18454852 bytes, temperature: kUnknown Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381786627, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 18454852, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18376674, "index_size": 43261, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33861, "raw_key_size": 363228, "raw_average_key_size": 26, "raw_value_size": 18145442, "raw_average_value_size": 1340, "num_data_blocks": 1625, "num_entries": 13532, "num_filter_entries": 13532, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670381, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}} Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.786937) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 18454852 bytes Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.788935) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.5 rd, 187.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 17.6 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(77.6) write-amplify(38.3) OK, records in: 14074, records dropped: 542 output_compression: NoCompression Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.788965) EVENT_LOG_v1 {"time_micros": 1764670381788952, "job": 24, "event": "compaction_finished", "compaction_time_micros": 98553, "compaction_time_cpu_micros": 45418, "output_level": 6, "num_output_files": 1, "total_output_size": 18454852, "num_input_records": 14074, "num_output_records": 13532, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005541914/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381789212, "job": 24, "event": "table_file_deletion", "file_number": 44} Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381792284, "job": 24, "event": "table_file_deletion", "file_number": 42} Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.688031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.792368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.792374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.792376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.792378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:13:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.792380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:13:01 localhost nova_compute[281045]: 2025-12-02 10:13:01.834 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 
2 05:13:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:13:02 localhost podman[322950]: 2025-12-02 10:13:02.822670237 +0000 UTC m=+0.063992908 container kill b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:13:02 localhost dnsmasq[321974]: exiting on receipt of SIGTERM Dec 2 05:13:02 localhost systemd[1]: libpod-b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071.scope: Deactivated successfully. Dec 2 05:13:02 localhost podman[322965]: 2025-12-02 10:13:02.896982249 +0000 UTC m=+0.056706993 container died b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:13:02 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:13:02 localhost podman[322965]: 2025-12-02 10:13:02.933970647 +0000 UTC m=+0.093695361 container cleanup b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 05:13:02 localhost systemd[1]: libpod-conmon-b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071.scope: Deactivated successfully. Dec 2 05:13:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e212 e212: 6 total, 6 up, 6 in Dec 2 05:13:02 localhost podman[322964]: 2025-12-02 10:13:02.997483548 +0000 UTC m=+0.153688894 container remove b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:13:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:13:03.030 262347 INFO neutron.agent.dhcp.agent [None req-e34b8e17-30d4-43a1-a30d-c3a69e5a6527 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:13:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:03.182 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:13:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:03.183 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:13:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:03.184 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:13:03 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v512: 177 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 87 KiB/s rd, 56 KiB/s wr, 125 op/s Dec 2 05:13:03 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:13:03.274 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:13:03 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60", "format": "json"}]: dispatch Dec 2 05:13:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:13:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:13:03 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:13:03.385+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60' of type subvolume Dec 2 05:13:03 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60' of type subvolume Dec 2 05:13:03 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60", "force": true, "format": "json"}]: dispatch Dec 2 05:13:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, vol_name:cephfs) < "" Dec 2 05:13:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60'' moved to trashcan Dec 2 05:13:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:13:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, vol_name:cephfs) < "" Dec 2 05:13:03 localhost podman[239757]: time="2025-12-02T10:13:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:13:03 localhost podman[239757]: @ - - [02/Dec/2025:10:13:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:13:03 localhost podman[239757]: @ - - [02/Dec/2025:10:13:03 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19238 "" "Go-http-client/1.1" Dec 2 05:13:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:13:03 localhost nova_compute[281045]: 2025-12-02 10:13:03.785 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:03 localhost systemd[1]: var-lib-containers-storage-overlay-2824933fa3ea201d2880e13a9881e7be88073d7cf00c05154858ec9e5ab1017d-merged.mount: Deactivated successfully. Dec 2 05:13:03 localhost systemd[1]: run-netns-qdhcp\x2d0ed6501a\x2d31af\x2d475e\x2d83c5\x2db9d22d72adda.mount: Deactivated successfully. Dec 2 05:13:03 localhost podman[322994]: 2025-12-02 10:13:03.829267676 +0000 UTC m=+0.080892247 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:13:03 localhost podman[322994]: 2025-12-02 10:13:03.865783218 +0000 UTC m=+0.117407789 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 05:13:03 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 05:13:04 localhost nova_compute[281045]: 2025-12-02 10:13:04.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:04 localhost nova_compute[281045]: 2025-12-02 10:13:04.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 2 05:13:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:13:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1146820639' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:13:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:13:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1146820639' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:13:04 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:04.690 2 INFO neutron.agent.securitygroups_rpc [None req-87ab1b98-cb82-4479-9190-360042c3aeed 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:04 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:04.986 2 INFO neutron.agent.securitygroups_rpc [None req-c2e2f39f-f9d0-4681-8662-02fbec20bbf5 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:05 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e213 e213: 6 total, 6 up, 6 in Dec 2 05:13:05 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v514: 177 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 89 KiB/s rd, 58 KiB/s wr, 127 op/s Dec 2 05:13:05 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:05.407 2 INFO neutron.agent.securitygroups_rpc [None req-2bcc8101-e89e-4e48-9cb1-9d691b1fbb0a 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:05 localhost nova_compute[281045]: 2025-12-02 10:13:05.539 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:05 localhost nova_compute[281045]: 2025-12-02 10:13:05.540 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic 
task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:05 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:05.766 2 INFO neutron.agent.securitygroups_rpc [None req-af45566d-4b46-4c57-afce-d4fa5a30fbdd 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:06 localhost nova_compute[281045]: 2025-12-02 10:13:06.155 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:06 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:06.260 2 INFO neutron.agent.securitygroups_rpc [None req-4587ba21-504d-4c76-ba2e-bc511139041d 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:06 localhost podman[323030]: 2025-12-02 10:13:06.479807417 +0000 UTC m=+0.062788459 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:13:06 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:13:06 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:13:06 localhost dnsmasq-dhcp[262677]: read 
/var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:13:06 localhost nova_compute[281045]: 2025-12-02 10:13:06.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:06 localhost nova_compute[281045]: 2025-12-02 10:13:06.559 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:13:06 localhost nova_compute[281045]: 2025-12-02 10:13:06.560 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:13:06 localhost nova_compute[281045]: 2025-12-02 10:13:06.560 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:13:06 localhost nova_compute[281045]: 2025-12-02 10:13:06.560 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:13:06 localhost nova_compute[281045]: 2025-12-02 10:13:06.561 281049 DEBUG 
oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:13:06 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1abd7d7a-1fad-4e16-a25e-c36a0784c2b0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:13:06 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, vol_name:cephfs) < "" Dec 2 05:13:06 localhost nova_compute[281045]: 2025-12-02 10:13:06.647 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e214 e214: 6 total, 6 up, 6 in Dec 2 05:13:06 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1abd7d7a-1fad-4e16-a25e-c36a0784c2b0/.meta.tmp' Dec 2 05:13:06 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1abd7d7a-1fad-4e16-a25e-c36a0784c2b0/.meta.tmp' to config b'/volumes/_nogroup/1abd7d7a-1fad-4e16-a25e-c36a0784c2b0/.meta' Dec 2 05:13:06 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, vol_name:cephfs) < "" Dec 2 05:13:06 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1abd7d7a-1fad-4e16-a25e-c36a0784c2b0", "format": "json"}]: dispatch Dec 2 05:13:06 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, vol_name:cephfs) < "" Dec 2 05:13:06 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, vol_name:cephfs) < "" Dec 2 05:13:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:13:06 Dec 2 05:13:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 05:13:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap Dec 2 05:13:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['vms', 'images', 'backups', '.mgr', 'manila_metadata', 'manila_data', 'volumes'] Dec 2 05:13:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes Dec 2 05:13:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:13:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:13:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:13:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:13:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:13:07 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/641882817' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:13:07 localhost nova_compute[281045]: 2025-12-02 10:13:07.035 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:13:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:13:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:13:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v516: 177 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 68 KiB/s rd, 44 KiB/s wr, 98 op/s Dec 2 05:13:07 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:07.211 2 INFO neutron.agent.securitygroups_rpc [None req-5417d582-9df9-4660-b1a1-ab7e1fcb97ff 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:07 localhost nova_compute[281045]: 2025-12-02 10:13:07.210 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:13:07 localhost nova_compute[281045]: 2025-12-02 10:13:07.213 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11496MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:13:07 localhost nova_compute[281045]: 2025-12-02 10:13:07.214 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:13:07 localhost nova_compute[281045]: 2025-12-02 10:13:07.214 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:13:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:13:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:13:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 2 05:13:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:13:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Dec 2 05:13:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:13:07 localhost ceph-mgr[287188]: 
[pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014869268216080402 of space, bias 1.0, pg target 0.2968897220477387 quantized to 32 (current 32) Dec 2 05:13:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:13:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 2 05:13:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:13:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 2 05:13:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:13:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00010850694444444444 quantized to 32 (current 32) Dec 2 05:13:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:13:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0005662535769123395 of space, bias 4.0, pg target 0.45073784722222227 quantized to 16 (current 16) Dec 2 05:13:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:13:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:13:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:13:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:13:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:13:07 localhost 
ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:13:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:13:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:13:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:13:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:13:07 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:13:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < "" Dec 2 05:13:07 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta.tmp' Dec 2 05:13:07 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta.tmp' to config b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta' Dec 2 05:13:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < "" Dec 2 05:13:07 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": 
"cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "format": "json"}]: dispatch Dec 2 05:13:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < "" Dec 2 05:13:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < "" Dec 2 05:13:07 localhost nova_compute[281045]: 2025-12-02 10:13:07.516 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:13:07 localhost nova_compute[281045]: 2025-12-02 10:13:07.516 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:13:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:13:07 localhost nova_compute[281045]: 2025-12-02 10:13:07.575 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 05:13:07 localhost nova_compute[281045]: 2025-12-02 10:13:07.652 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - 
-] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 05:13:07 localhost nova_compute[281045]: 2025-12-02 10:13:07.653 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 05:13:07 localhost nova_compute[281045]: 2025-12-02 10:13:07.671 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 05:13:07 localhost nova_compute[281045]: 2025-12-02 10:13:07.696 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: 
HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 05:13:07 localhost nova_compute[281045]: 2025-12-02 10:13:07.713 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:13:08 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": 
"json"} v 0) Dec 2 05:13:08 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3210582778' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:13:08 localhost nova_compute[281045]: 2025-12-02 10:13:08.177 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:13:08 localhost nova_compute[281045]: 2025-12-02 10:13:08.181 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:13:08 localhost nova_compute[281045]: 2025-12-02 10:13:08.199 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:13:08 localhost nova_compute[281045]: 2025-12-02 10:13:08.201 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:13:08 localhost nova_compute[281045]: 2025-12-02 10:13:08.201 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:13:08 localhost nova_compute[281045]: 2025-12-02 10:13:08.830 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:09 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e215 e215: 6 total, 6 up, 6 in Dec 2 05:13:09 localhost nova_compute[281045]: 2025-12-02 10:13:09.198 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:09 localhost nova_compute[281045]: 2025-12-02 10:13:09.198 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:09 localhost nova_compute[281045]: 2025-12-02 10:13:09.198 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:09 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v518: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 83 KiB/s rd, 51 KiB/s wr, 113 op/s Dec 2 05:13:09 localhost ceph-mgr[287188]: 
log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1abd7d7a-1fad-4e16-a25e-c36a0784c2b0", "format": "json"}]: dispatch Dec 2 05:13:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:13:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:13:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:13:09.936+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1abd7d7a-1fad-4e16-a25e-c36a0784c2b0' of type subvolume Dec 2 05:13:09 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1abd7d7a-1fad-4e16-a25e-c36a0784c2b0' of type subvolume Dec 2 05:13:09 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1abd7d7a-1fad-4e16-a25e-c36a0784c2b0", "force": true, "format": "json"}]: dispatch Dec 2 05:13:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, vol_name:cephfs) < "" Dec 2 05:13:09 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1abd7d7a-1fad-4e16-a25e-c36a0784c2b0'' moved to trashcan Dec 2 05:13:09 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:13:09 localhost 
ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, vol_name:cephfs) < "" Dec 2 05:13:10 localhost nova_compute[281045]: 2025-12-02 10:13:10.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:10 localhost nova_compute[281045]: 2025-12-02 10:13:10.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:13:10 localhost nova_compute[281045]: 2025-12-02 10:13:10.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:13:10 localhost nova_compute[281045]: 2025-12-02 10:13:10.553 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 05:13:10 localhost nova_compute[281045]: 2025-12-02 10:13:10.553 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:10 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "snap_name": "d93e1e67-e087-48ee-b546-b47b516fdb8a", "format": "json"}]: dispatch Dec 2 05:13:10 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d93e1e67-e087-48ee-b546-b47b516fdb8a, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < "" Dec 2 05:13:10 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d93e1e67-e087-48ee-b546-b47b516fdb8a, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < "" Dec 2 05:13:11 localhost nova_compute[281045]: 2025-12-02 10:13:11.159 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v519: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 50 KiB/s wr, 110 op/s Dec 2 05:13:11 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:11.353 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to 
row=Port_Binding(mac=['fa:16:3e:b4:47:bc 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-e625cddc-8a19-4455-8def-acda09527180', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e625cddc-8a19-4455-8def-acda09527180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39bb33c3-24c9-42a5-b452-f2a8901739e7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f6a41283-fc6f-4680-bb13-d9b38f4d32ad) old=Port_Binding(mac=['fa:16:3e:b4:47:bc 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-e625cddc-8a19-4455-8def-acda09527180', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e625cddc-8a19-4455-8def-acda09527180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:13:11 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:11.355 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f6a41283-fc6f-4680-bb13-d9b38f4d32ad in datapath e625cddc-8a19-4455-8def-acda09527180 updated#033[00m Dec 2 
05:13:11 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:11.357 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e625cddc-8a19-4455-8def-acda09527180, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:13:11 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:11.358 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[48236fc2-2d86-47ff-91d2-1da9728cffd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:13:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:13:11 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2086186195' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:13:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:13:11 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2086186195' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:13:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e216 e216: 6 total, 6 up, 6 in Dec 2 05:13:12 localhost openstack_network_exporter[241816]: ERROR 10:13:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:13:12 localhost openstack_network_exporter[241816]: ERROR 10:13:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:13:12 localhost openstack_network_exporter[241816]: ERROR 10:13:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:13:12 localhost openstack_network_exporter[241816]: ERROR 10:13:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:13:12 localhost openstack_network_exporter[241816]: Dec 2 05:13:12 localhost openstack_network_exporter[241816]: ERROR 10:13:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:13:12 localhost openstack_network_exporter[241816]: Dec 2 05:13:12 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:12.475 2 INFO neutron.agent.securitygroups_rpc [None req-7de79900-a7b1-40d4-917a-526ff9de5d92 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:12 localhost nova_compute[281045]: 2025-12-02 10:13:12.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:12 localhost nova_compute[281045]: 2025-12-02 10:13:12.527 281049 DEBUG nova.compute.manager [None 
req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:13:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:13:13 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:13.014 2 INFO neutron.agent.securitygroups_rpc [None req-bfd5de53-2a30-4cbd-a16d-14779023f72a 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:13 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "51b857f0-7bbc-47ae-84d3-61ced77d3364", "format": "json"}]: dispatch Dec 2 05:13:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:51b857f0-7bbc-47ae-84d3-61ced77d3364, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:51b857f0-7bbc-47ae-84d3-61ced77d3364, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e217 e217: 6 total, 6 up, 6 in Dec 2 05:13:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v522: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 149 KiB/s rd, 76 KiB/s wr, 204 op/s Dec 2 05:13:13 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:13.619 2 INFO 
neutron.agent.securitygroups_rpc [None req-c543e5d9-b160-4e67-9d20-7dd20a3ffb2c 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:13 localhost nova_compute[281045]: 2025-12-02 10:13:13.861 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:14 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:14.261 2 INFO neutron.agent.securitygroups_rpc [None req-654147e9-be5d-4ad5-89df-552977a60280 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:14 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:13:14 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/853958348' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:13:14 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:13:14 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/853958348' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:13:14 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "snap_name": "d93e1e67-e087-48ee-b546-b47b516fdb8a_ad4a4f2d-b818-4a17-accd-1b8ca8c807b2", "force": true, "format": "json"}]: dispatch Dec 2 05:13:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d93e1e67-e087-48ee-b546-b47b516fdb8a_ad4a4f2d-b818-4a17-accd-1b8ca8c807b2, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < "" Dec 2 05:13:14 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta.tmp' Dec 2 05:13:14 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta.tmp' to config b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta' Dec 2 05:13:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d93e1e67-e087-48ee-b546-b47b516fdb8a_ad4a4f2d-b818-4a17-accd-1b8ca8c807b2, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < "" Dec 2 05:13:14 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "snap_name": "d93e1e67-e087-48ee-b546-b47b516fdb8a", "force": true, "format": "json"}]: dispatch Dec 2 05:13:14 localhost ceph-mgr[287188]: 
[volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d93e1e67-e087-48ee-b546-b47b516fdb8a, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < "" Dec 2 05:13:14 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta.tmp' Dec 2 05:13:14 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta.tmp' to config b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta' Dec 2 05:13:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d93e1e67-e087-48ee-b546-b47b516fdb8a, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < "" Dec 2 05:13:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v523: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 24 KiB/s wr, 89 op/s Dec 2 05:13:16 localhost nova_compute[281045]: 2025-12-02 10:13:16.163 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:16 localhost nova_compute[281045]: 2025-12-02 10:13:16.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:16 localhost nova_compute[281045]: 2025-12-02 10:13:16.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances _run_pending_deletes 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 2 05:13:16 localhost nova_compute[281045]: 2025-12-02 10:13:16.551 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 2 05:13:16 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "51b857f0-7bbc-47ae-84d3-61ced77d3364_ef18a710-573a-49b3-bf4f-f42593b839e9", "force": true, "format": "json"}]: dispatch Dec 2 05:13:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:51b857f0-7bbc-47ae-84d3-61ced77d3364_ef18a710-573a-49b3-bf4f-f42593b839e9, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:16 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' Dec 2 05:13:16 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta' Dec 2 05:13:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:51b857f0-7bbc-47ae-84d3-61ced77d3364_ef18a710-573a-49b3-bf4f-f42593b839e9, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:16 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot 
rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "51b857f0-7bbc-47ae-84d3-61ced77d3364", "force": true, "format": "json"}]: dispatch Dec 2 05:13:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:51b857f0-7bbc-47ae-84d3-61ced77d3364, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:16 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' Dec 2 05:13:16 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta' Dec 2 05:13:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:51b857f0-7bbc-47ae-84d3-61ced77d3364, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v524: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 18 KiB/s wr, 68 op/s Dec 2 05:13:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:13:17 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "format": "json"}]: dispatch Dec 2 05:13:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting 
_cmd_fs_clone_status(clone_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:13:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:13:17 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1eb437a1-55f1-4e1b-ab0c-b43c26904e3d' of type subvolume Dec 2 05:13:17 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:13:17.595+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1eb437a1-55f1-4e1b-ab0c-b43c26904e3d' of type subvolume Dec 2 05:13:17 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "force": true, "format": "json"}]: dispatch Dec 2 05:13:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < "" Dec 2 05:13:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d'' moved to trashcan Dec 2 05:13:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:13:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < "" Dec 2 05:13:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e218 e218: 6 total, 6 up, 6 in Dec 2 05:13:18 
localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e219 e219: 6 total, 6 up, 6 in Dec 2 05:13:18 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:18.829 2 INFO neutron.agent.securitygroups_rpc [None req-809c0752-7390-4fe2-a50b-599cd97feccb 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:18 localhost nova_compute[281045]: 2025-12-02 10:13:18.896 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v527: 177 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 71 KiB/s wr, 117 op/s Dec 2 05:13:19 localhost podman[323111]: 2025-12-02 10:13:19.3329361 +0000 UTC m=+0.066116172 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:13:19 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses Dec 2 05:13:19 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:13:19 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:13:19 localhost nova_compute[281045]: 2025-12-02 10:13:19.573 281049 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:19 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:19.816 2 INFO neutron.agent.securitygroups_rpc [None req-86a7997e-3c35-4f55-af22-9588e0545ddf 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:20 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:20.546 2 INFO neutron.agent.securitygroups_rpc [None req-173c0738-50d3-48cc-af5a-b78421c8e23c 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:20 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:20.979 2 INFO neutron.agent.securitygroups_rpc [None req-a615b87d-8e60-4d08-9004-5c448f1ed91b 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:21 localhost nova_compute[281045]: 2025-12-02 10:13:21.196 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v528: 177 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 53 KiB/s wr, 88 op/s Dec 2 05:13:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e220 e220: 6 total, 6 up, 6 in Dec 2 05:13:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:13:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. 
Dec 2 05:13:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:13:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:13:22 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "71fe3ff3-77b1-42b9-a13c-7c107bdd326d", "format": "json"}]: dispatch Dec 2 05:13:22 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:71fe3ff3-77b1-42b9-a13c-7c107bdd326d, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:22 localhost podman[323131]: 2025-12-02 10:13:22.086315492 +0000 UTC m=+0.087511370 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent) Dec 2 05:13:22 localhost systemd[1]: tmp-crun.QJdCpA.mount: Deactivated successfully. Dec 2 05:13:22 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:71fe3ff3-77b1-42b9-a13c-7c107bdd326d, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:22 localhost podman[323132]: 2025-12-02 10:13:22.101222291 +0000 UTC m=+0.098231011 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:13:22 localhost podman[323133]: 2025-12-02 10:13:22.146651376 +0000 UTC m=+0.137522966 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:13:22 localhost podman[323139]: 2025-12-02 10:13:22.166243688 +0000 UTC m=+0.150723272 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 2 05:13:22 localhost podman[323131]: 2025-12-02 10:13:22.173172381 +0000 UTC m=+0.174368309 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:13:22 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 05:13:22 localhost podman[323132]: 2025-12-02 10:13:22.188960426 +0000 UTC m=+0.185969176 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:13:22 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 05:13:22 localhost podman[323139]: 2025-12-02 10:13:22.207731463 +0000 UTC m=+0.192211077 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:13:22 localhost podman[323133]: 2025-12-02 10:13:22.233071652 +0000 UTC m=+0.223943182 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Dec 2 05:13:22 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 05:13:22 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:13:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:13:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v530: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 86 KiB/s rd, 82 KiB/s wr, 124 op/s Dec 2 05:13:23 localhost nova_compute[281045]: 2025-12-02 10:13:23.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:23 localhost nova_compute[281045]: 2025-12-02 10:13:23.923 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:24 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:24.100 2 INFO neutron.agent.securitygroups_rpc [None req-1c76d5fd-a4fe-47e2-aa2d-3afed3d7786f 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:24 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:13:24.346 262347 INFO neutron.agent.linux.ip_lib [None req-0de97030-fe51-4340-a1d2-e26ef3f59dda - - - - - -] Device tapf15aee39-f3 cannot be used as it has no MAC address#033[00m Dec 2 05:13:24 localhost nova_compute[281045]: 2025-12-02 10:13:24.370 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:24 localhost kernel: device tapf15aee39-f3 entered promiscuous mode Dec 2 05:13:24 localhost NetworkManager[5967]: [1764670404.3812] manager: (tapf15aee39-f3): new Generic device (/org/freedesktop/NetworkManager/Devices/46) Dec 2 05:13:24 localhost 
ovn_controller[153778]: 2025-12-02T10:13:24Z|00225|binding|INFO|Claiming lport f15aee39-f3c5-43c5-8331-48f6ffa03ae6 for this chassis. Dec 2 05:13:24 localhost ovn_controller[153778]: 2025-12-02T10:13:24Z|00226|binding|INFO|f15aee39-f3c5-43c5-8331-48f6ffa03ae6: Claiming unknown Dec 2 05:13:24 localhost nova_compute[281045]: 2025-12-02 10:13:24.382 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:24 localhost systemd-udevd[323224]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:13:24 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:24.392 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-689e2fb8-60b0-49ed-bf14-a87677f003a5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-689e2fb8-60b0-49ed-bf14-a87677f003a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7028d62-160f-453a-9151-4aa69f4e4388, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f15aee39-f3c5-43c5-8331-48f6ffa03ae6) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:13:24 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:24.394 159483 INFO neutron.agent.ovn.metadata.agent [-] Port f15aee39-f3c5-43c5-8331-48f6ffa03ae6 in datapath 689e2fb8-60b0-49ed-bf14-a87677f003a5 bound to our chassis#033[00m Dec 2 05:13:24 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:24.396 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port b87ab23f-af15-44a2-85b2-a53866079edf IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:13:24 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:24.397 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 689e2fb8-60b0-49ed-bf14-a87677f003a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:13:24 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:24.397 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[a84b25bb-e90b-4e0b-9e24-586e17f5aa57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:13:24 localhost journal[229262]: ethtool ioctl error on tapf15aee39-f3: No such device Dec 2 05:13:24 localhost journal[229262]: ethtool ioctl error on tapf15aee39-f3: No such device Dec 2 05:13:24 localhost nova_compute[281045]: 2025-12-02 10:13:24.418 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:24 localhost journal[229262]: ethtool ioctl error on tapf15aee39-f3: No such device Dec 2 05:13:24 localhost ovn_controller[153778]: 2025-12-02T10:13:24Z|00227|binding|INFO|Setting lport f15aee39-f3c5-43c5-8331-48f6ffa03ae6 ovn-installed in OVS Dec 2 05:13:24 localhost ovn_controller[153778]: 
2025-12-02T10:13:24Z|00228|binding|INFO|Setting lport f15aee39-f3c5-43c5-8331-48f6ffa03ae6 up in Southbound Dec 2 05:13:24 localhost nova_compute[281045]: 2025-12-02 10:13:24.424 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:24 localhost journal[229262]: ethtool ioctl error on tapf15aee39-f3: No such device Dec 2 05:13:24 localhost nova_compute[281045]: 2025-12-02 10:13:24.426 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:24 localhost journal[229262]: ethtool ioctl error on tapf15aee39-f3: No such device Dec 2 05:13:24 localhost journal[229262]: ethtool ioctl error on tapf15aee39-f3: No such device Dec 2 05:13:24 localhost journal[229262]: ethtool ioctl error on tapf15aee39-f3: No such device Dec 2 05:13:24 localhost journal[229262]: ethtool ioctl error on tapf15aee39-f3: No such device Dec 2 05:13:24 localhost nova_compute[281045]: 2025-12-02 10:13:24.456 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:24 localhost nova_compute[281045]: 2025-12-02 10:13:24.475 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v531: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 66 KiB/s wr, 99 op/s Dec 2 05:13:25 localhost podman[323295]: Dec 2 05:13:25 localhost podman[323295]: 2025-12-02 10:13:25.341790982 +0000 UTC m=+0.059997584 container create f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:13:25 localhost systemd[1]: Started libpod-conmon-f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2.scope. Dec 2 05:13:25 localhost systemd[1]: Started libcrun container. Dec 2 05:13:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4f5f02ccadbec04bd08e00844ffed85a529fb4b80bc5fbf08b5b1862ed14228/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:13:25 localhost podman[323295]: 2025-12-02 10:13:25.31045988 +0000 UTC m=+0.028666472 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:13:25 localhost podman[323295]: 2025-12-02 10:13:25.413746993 +0000 UTC m=+0.131953575 container init f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:13:25 localhost systemd[1]: tmp-crun.F4yA1N.mount: Deactivated successfully. 
Dec 2 05:13:25 localhost podman[323295]: 2025-12-02 10:13:25.42144119 +0000 UTC m=+0.139647822 container start f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:13:25 localhost dnsmasq[323313]: started, version 2.85 cachesize 150 Dec 2 05:13:25 localhost dnsmasq[323313]: DNS service limited to local subnets Dec 2 05:13:25 localhost dnsmasq[323313]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:13:25 localhost dnsmasq[323313]: warning: no upstream servers configured Dec 2 05:13:25 localhost dnsmasq-dhcp[323313]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:13:25 localhost dnsmasq[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/addn_hosts - 0 addresses Dec 2 05:13:25 localhost dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/host Dec 2 05:13:25 localhost dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/opts Dec 2 05:13:25 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:13:25.482 262347 INFO neutron.agent.dhcp.agent [None req-070bb103-3d0c-40ca-b063-f53a13f625be - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:13:23Z, description=, device_id=, 
device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cd659161-3401-46de-89bb-6bc014b75b2f, ip_allocation=immediate, mac_address=fa:16:3e:5f:95:4d, name=tempest-PortsTestJSON-1991715511, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:13:22Z, description=, dns_domain=, id=689e2fb8-60b0-49ed-bf14-a87677f003a5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1651050851, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50437, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3112, status=ACTIVE, subnets=['9e76e82a-6a74-4080-8429-de5a530068cf'], tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:22Z, vlan_transparent=None, network_id=689e2fb8-60b0-49ed-bf14-a87677f003a5, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a05fa096-2813-49c8-a900-5ab13174ee5a'], standard_attr_id=3139, status=DOWN, tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:23Z on network 689e2fb8-60b0-49ed-bf14-a87677f003a5#033[00m Dec 2 05:13:25 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "cab3f864-d18a-47fe-ac99-2bece590c4f2", "format": "json"}]: dispatch Dec 2 05:13:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cab3f864-d18a-47fe-ac99-2bece590c4f2, 
sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:25 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:13:25.592 262347 INFO neutron.agent.dhcp.agent [None req-9b93fd14-9258-4557-862f-f1a6211b5023 - - - - - -] DHCP configuration for ports {'b3f2ab16-9924-4121-b481-6649f6786325'} is completed#033[00m Dec 2 05:13:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cab3f864-d18a-47fe-ac99-2bece590c4f2, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:25 localhost dnsmasq[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/addn_hosts - 1 addresses Dec 2 05:13:25 localhost dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/host Dec 2 05:13:25 localhost dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/opts Dec 2 05:13:25 localhost podman[323330]: 2025-12-02 10:13:25.753585825 +0000 UTC m=+0.064579495 container kill f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:13:25 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:13:25.957 262347 INFO neutron.agent.dhcp.agent [None req-69e49db5-98ea-4402-a4ef-a0f7f5a098bd - - - - - -] DHCP configuration for ports {'cd659161-3401-46de-89bb-6bc014b75b2f'} is completed#033[00m Dec 2 05:13:26 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:13:26.033 262347 INFO 
neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:13:23Z, description=, device_id=d65b4168-a26b-402d-b6c6-567c639808fe, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cd659161-3401-46de-89bb-6bc014b75b2f, ip_allocation=immediate, mac_address=fa:16:3e:5f:95:4d, name=tempest-PortsTestJSON-1991715511, network_id=689e2fb8-60b0-49ed-bf14-a87677f003a5, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['a05fa096-2813-49c8-a900-5ab13174ee5a'], standard_attr_id=3139, status=ACTIVE, tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:25Z on network 689e2fb8-60b0-49ed-bf14-a87677f003a5#033[00m Dec 2 05:13:26 localhost sshd[323368]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:13:26 localhost nova_compute[281045]: 2025-12-02 10:13:26.227 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:26 localhost podman[323370]: 2025-12-02 10:13:26.269019753 +0000 UTC m=+0.061642376 container kill f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 05:13:26 localhost 
dnsmasq[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/addn_hosts - 1 addresses Dec 2 05:13:26 localhost dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/host Dec 2 05:13:26 localhost dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/opts Dec 2 05:13:26 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:13:26.518 262347 INFO neutron.agent.dhcp.agent [None req-0f815766-69a0-42ad-a679-f5d8d64c20ec - - - - - -] DHCP configuration for ports {'cd659161-3401-46de-89bb-6bc014b75b2f'} is completed#033[00m Dec 2 05:13:26 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e221 e221: 6 total, 6 up, 6 in Dec 2 05:13:26 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:26.678 2 INFO neutron.agent.securitygroups_rpc [None req-03a5d6f5-e9fc-4da2-b77f-f6e56b5ab3f7 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:26 localhost systemd[1]: tmp-crun.duMxgT.mount: Deactivated successfully. 
Dec 2 05:13:26 localhost dnsmasq[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/addn_hosts - 0 addresses Dec 2 05:13:26 localhost dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/host Dec 2 05:13:26 localhost dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/opts Dec 2 05:13:26 localhost podman[323408]: 2025-12-02 10:13:26.89769353 +0000 UTC m=+0.047731238 container kill f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:13:27 localhost nova_compute[281045]: 2025-12-02 10:13:27.079 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:27 localhost ovn_controller[153778]: 2025-12-02T10:13:27Z|00229|binding|INFO|Releasing lport f15aee39-f3c5-43c5-8331-48f6ffa03ae6 from this chassis (sb_readonly=0) Dec 2 05:13:27 localhost kernel: device tapf15aee39-f3 left promiscuous mode Dec 2 05:13:27 localhost ovn_controller[153778]: 2025-12-02T10:13:27Z|00230|binding|INFO|Setting lport f15aee39-f3c5-43c5-8331-48f6ffa03ae6 down in Southbound Dec 2 05:13:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:27.087 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], 
virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-689e2fb8-60b0-49ed-bf14-a87677f003a5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-689e2fb8-60b0-49ed-bf14-a87677f003a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7028d62-160f-453a-9151-4aa69f4e4388, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f15aee39-f3c5-43c5-8331-48f6ffa03ae6) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:13:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:27.088 159483 INFO neutron.agent.ovn.metadata.agent [-] Port f15aee39-f3c5-43c5-8331-48f6ffa03ae6 in datapath 689e2fb8-60b0-49ed-bf14-a87677f003a5 unbound from our chassis#033[00m Dec 2 05:13:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:27.089 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 689e2fb8-60b0-49ed-bf14-a87677f003a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:13:27 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:27.090 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[0b9c9ab1-be23-4294-827a-e1c92b1e4fe9]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:13:27 localhost nova_compute[281045]: 2025-12-02 10:13:27.105 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:27 localhost nova_compute[281045]: 2025-12-02 10:13:27.106 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v533: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 1.7 KiB/s rd, 8.6 KiB/s wr, 5 op/s Dec 2 05:13:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:13:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:13:27 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3797520786' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:13:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:13:27 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3797520786' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:13:27 localhost dnsmasq[323313]: exiting on receipt of SIGTERM Dec 2 05:13:27 localhost podman[323447]: 2025-12-02 10:13:27.615096753 +0000 UTC m=+0.044790957 container kill f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:13:27 localhost systemd[1]: libpod-f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2.scope: Deactivated successfully. Dec 2 05:13:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:13:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 05:13:27 localhost podman[323460]: 2025-12-02 10:13:27.699228108 +0000 UTC m=+0.068716573 container died f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 2 05:13:27 localhost podman[323460]: 2025-12-02 10:13:27.731348955 +0000 UTC m=+0.100837390 container cleanup f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 2 05:13:27 localhost systemd[1]: libpod-conmon-f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2.scope: Deactivated successfully.
Dec 2 05:13:27 localhost podman[323462]: 2025-12-02 10:13:27.748874804 +0000 UTC m=+0.110345322 container remove f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 2 05:13:27 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:13:27.772 262347 INFO neutron.agent.dhcp.agent [None req-84945643-cf33-4819-8941-2aea5216c05a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:13:27 localhost podman[323474]: 2025-12-02 10:13:27.800544351 +0000 UTC m=+0.132730489 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-type=git, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public)
Dec 2 05:13:27 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:13:27.816 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:13:27 localhost podman[323473]: 2025-12-02 10:13:27.780717072 +0000 UTC m=+0.119303467 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 2 05:13:27 localhost podman[323473]: 2025-12-02 10:13:27.866073704 +0000 UTC m=+0.204660109 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 2 05:13:27 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 05:13:27 localhost podman[323474]: 2025-12-02 10:13:27.884863252 +0000 UTC m=+0.217049420 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Dec 2 05:13:27 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 05:13:28 localhost nova_compute[281045]: 2025-12-02 10:13:28.014 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:28 localhost systemd[1]: var-lib-containers-storage-overlay-d4f5f02ccadbec04bd08e00844ffed85a529fb4b80bc5fbf08b5b1862ed14228-merged.mount: Deactivated successfully.
Dec 2 05:13:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2-userdata-shm.mount: Deactivated successfully.
Dec 2 05:13:28 localhost systemd[1]: run-netns-qdhcp\x2d689e2fb8\x2d60b0\x2d49ed\x2dbf14\x2da87677f003a5.mount: Deactivated successfully.
Dec 2 05:13:28 localhost nova_compute[281045]: 2025-12-02 10:13:28.926 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v534: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 27 KiB/s wr, 46 op/s
Dec 2 05:13:29 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "75cce5b0-a115-45be-bdca-5a004bb97c21", "format": "json"}]: dispatch
Dec 2 05:13:29 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:75cce5b0-a115-45be-bdca-5a004bb97c21, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 2 05:13:29 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:75cce5b0-a115-45be-bdca-5a004bb97c21, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 2 05:13:30 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:30.049 2 INFO neutron.agent.securitygroups_rpc [None req-fd3a2c94-7b8a-4692-92f4-6e53ae5bb9ca 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['b06e62c3-67bb-4248-8ca7-8eec12bdd5e1']#033[00m
Dec 2 05:13:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v535: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 22 KiB/s wr, 39 op/s
Dec 2 05:13:31 localhost nova_compute[281045]: 2025-12-02 10:13:31.278 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:32 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:32.443 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:56:e4 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9b4da99-dc68-46c9-bcf0-a3cfe207d767, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3d63b6f3-67ae-4c21-b56a-394abd9240e9) old=Port_Binding(mac=['fa:16:3e:31:56:e4 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:13:32 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:32.445 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3d63b6f3-67ae-4c21-b56a-394abd9240e9 in datapath 55499ea7-fec3-45ce-8fdc-4c408cd7abf9 updated#033[00m
Dec 2 05:13:32 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:32.447 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55499ea7-fec3-45ce-8fdc-4c408cd7abf9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:13:32 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:32.448 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[29ac695f-cf82-47a4-8130-ab087af71fc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:13:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:13:32 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8", "format": "json"}]: dispatch
Dec 2 05:13:32 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 2 05:13:32 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 2 05:13:33 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:33.040 2 INFO neutron.agent.securitygroups_rpc [None req-2f13aa5e-1f09-4188-af19-a84ba5538b10 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['b06e62c3-67bb-4248-8ca7-8eec12bdd5e1', '19b93206-6bbf-441b-abe9-609f462663ba']#033[00m
Dec 2 05:13:33 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 05:13:33 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 05:13:33 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 2 05:13:33 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 05:13:33 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 2 05:13:33 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev 4d3a6ad7-b49d-4fb4-8024-a6f6a362305d (Updating node-proxy deployment (+3 -> 3))
Dec 2 05:13:33 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 4d3a6ad7-b49d-4fb4-8024-a6f6a362305d (Updating node-proxy deployment (+3 -> 3))
Dec 2 05:13:33 localhost ceph-mgr[287188]: [progress INFO root] Completed event 4d3a6ad7-b49d-4fb4-8024-a6f6a362305d (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 2 05:13:33 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 2 05:13:33 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 2 05:13:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 24 KiB/s wr, 33 op/s
Dec 2 05:13:33 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:33.506 2 INFO neutron.agent.securitygroups_rpc [None req-af664360-5183-46be-8a67-9553906db0ca 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['19b93206-6bbf-441b-abe9-609f462663ba']#033[00m
Dec 2 05:13:33 localhost podman[239757]: time="2025-12-02T10:13:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 05:13:33 localhost podman[239757]: @ - - [02/Dec/2025:10:13:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 2 05:13:33 localhost podman[239757]: @ - - [02/Dec/2025:10:13:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19232 "" "Go-http-client/1.1"
Dec 2 05:13:33 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 05:13:33 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:13:33 localhost nova_compute[281045]: 2025-12-02 10:13:33.931 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 05:13:34 localhost podman[323611]: 2025-12-02 10:13:34.086981423 +0000 UTC m=+0.085953392 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 2 05:13:34 localhost podman[323611]: 2025-12-02 10:13:34.100151898 +0000 UTC m=+0.099123897 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 2 05:13:34 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 05:13:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v537: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 24 KiB/s wr, 33 op/s
Dec 2 05:13:35 localhost nova_compute[281045]: 2025-12-02 10:13:35.534 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:35.536 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:13:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:35.537 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 2 05:13:36 localhost nova_compute[281045]: 2025-12-02 10:13:36.324 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:36 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:36.627 2 INFO neutron.agent.securitygroups_rpc [None req-3559f9f7-1434-4371-a7c8-42d18644b0ee 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['5ce035be-6b85-468c-9f45-e514c3373f72']#033[00m
Dec 2 05:13:36 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "bbc0db63-e14e-46b1-8a2c-1e2c8a265a54", "format": "json"}]: dispatch
Dec 2 05:13:36 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bbc0db63-e14e-46b1-8a2c-1e2c8a265a54, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 2 05:13:36 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bbc0db63-e14e-46b1-8a2c-1e2c8a265a54, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 2 05:13:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:13:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:13:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:13:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:13:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:13:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:13:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v538: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 23 KiB/s wr, 31 op/s
Dec 2 05:13:37 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 2 05:13:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 2 05:13:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:13:37 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:13:37 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:37.971 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:56:e4 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9b4da99-dc68-46c9-bcf0-a3cfe207d767, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3d63b6f3-67ae-4c21-b56a-394abd9240e9) old=Port_Binding(mac=['fa:16:3e:31:56:e4 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:13:37 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:37.973 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3d63b6f3-67ae-4c21-b56a-394abd9240e9 in datapath 55499ea7-fec3-45ce-8fdc-4c408cd7abf9 updated#033[00m
Dec 2 05:13:37 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:37.976 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55499ea7-fec3-45ce-8fdc-4c408cd7abf9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:13:37 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:37.977 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf4aa1e-d95c-4ab9-a7ee-7369d230f4ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:13:38 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 2 05:13:38 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2836928614' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 2 05:13:38 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 2 05:13:38 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2836928614' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 2 05:13:38 localhost nova_compute[281045]: 2025-12-02 10:13:38.933 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:39 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v539: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 26 KiB/s wr, 37 op/s
Dec 2 05:13:39 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:39.413 2 INFO neutron.agent.securitygroups_rpc [None req-f1fb19ca-fe9a-41d0-b171-21af469f3b04 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['5ce035be-6b85-468c-9f45-e514c3373f72', '4635549b-8be4-4094-becd-47d2d3f392be', '4dd0e6ef-da7b-4d17-b1c7-4a0b0fd81445']#033[00m
Dec 2 05:13:39 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:39.969 2 INFO neutron.agent.securitygroups_rpc [None req-1b71adfa-49cf-47f5-a7a4-715d1b19b4b9 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['4635549b-8be4-4094-becd-47d2d3f392be', '4dd0e6ef-da7b-4d17-b1c7-4a0b0fd81445']#033[00m
Dec 2 05:13:40 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "bbc0db63-e14e-46b1-8a2c-1e2c8a265a54_19841362-7310-4db1-9177-f7698f0587e5", "force": true, "format": "json"}]: dispatch
Dec 2 05:13:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bbc0db63-e14e-46b1-8a2c-1e2c8a265a54_19841362-7310-4db1-9177-f7698f0587e5, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 2 05:13:40 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 2 05:13:40 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 2 05:13:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bbc0db63-e14e-46b1-8a2c-1e2c8a265a54_19841362-7310-4db1-9177-f7698f0587e5, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 2 05:13:40 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "bbc0db63-e14e-46b1-8a2c-1e2c8a265a54", "force": true, "format": "json"}]: dispatch
Dec 2 05:13:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bbc0db63-e14e-46b1-8a2c-1e2c8a265a54, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 2 05:13:40 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 2 05:13:40 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 2 05:13:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bbc0db63-e14e-46b1-8a2c-1e2c8a265a54, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 2 05:13:40 localhost ovn_metadata_agent[159477]: 2025-12-02 10:13:40.539 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 2 05:13:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v540: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 6.7 KiB/s rd, 14 KiB/s wr, 9 op/s
Dec 2 05:13:41 localhost nova_compute[281045]: 2025-12-02 10:13:41.364 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:42 localhost neutron_sriov_agent[255428]: 2025-12-02 10:13:42.059 2 INFO neutron.agent.securitygroups_rpc [None req-0f8f5643-0b03-43e9-aad8-6bac530a8f71 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:13:42 localhost openstack_network_exporter[241816]: ERROR 10:13:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:13:42 localhost openstack_network_exporter[241816]: ERROR 10:13:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 05:13:42 localhost openstack_network_exporter[241816]: ERROR 10:13:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:13:42 localhost openstack_network_exporter[241816]: ERROR 10:13:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 05:13:42 localhost openstack_network_exporter[241816]:
Dec 2 05:13:42 localhost openstack_network_exporter[241816]: ERROR 10:13:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 05:13:42 localhost openstack_network_exporter[241816]:
Dec 2 05:13:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:13:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v541: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 25 KiB/s wr, 16 op/s
Dec 2 05:13:43 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8_3ce82e88-98cf-4623-9819-20bc79fd24ff", "force": true, "format": "json"}]: dispatch
Dec 2 05:13:43 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8_3ce82e88-98cf-4623-9819-20bc79fd24ff, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 2 05:13:43 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config
b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' Dec 2 05:13:43 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta' Dec 2 05:13:43 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8_3ce82e88-98cf-4623-9819-20bc79fd24ff, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:43 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8", "force": true, "format": "json"}]: dispatch Dec 2 05:13:43 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:43 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' Dec 2 05:13:43 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta' Dec 2 05:13:43 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, 
vol_name:cephfs) < "" Dec 2 05:13:43 localhost nova_compute[281045]: 2025-12-02 10:13:43.934 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 17 KiB/s wr, 16 op/s Dec 2 05:13:46 localhost nova_compute[281045]: 2025-12-02 10:13:46.404 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "75cce5b0-a115-45be-bdca-5a004bb97c21_5c7747e5-6063-411a-b429-32a057c35acd", "force": true, "format": "json"}]: dispatch Dec 2 05:13:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:75cce5b0-a115-45be-bdca-5a004bb97c21_5c7747e5-6063-411a-b429-32a057c35acd, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' Dec 2 05:13:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta' Dec 2 05:13:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:75cce5b0-a115-45be-bdca-5a004bb97c21_5c7747e5-6063-411a-b429-32a057c35acd, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "75cce5b0-a115-45be-bdca-5a004bb97c21", "force": true, "format": "json"}]: dispatch Dec 2 05:13:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:75cce5b0-a115-45be-bdca-5a004bb97c21, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' Dec 2 05:13:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta' Dec 2 05:13:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:75cce5b0-a115-45be-bdca-5a004bb97c21, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v543: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 17 KiB/s wr, 16 op/s Dec 2 05:13:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:13:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command 
mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:13:47 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3643059818' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:13:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:13:47 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3643059818' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:13:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e222 e222: 6 total, 6 up, 6 in Dec 2 05:13:48 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e223 e223: 6 total, 6 up, 6 in Dec 2 05:13:48 localhost nova_compute[281045]: 2025-12-02 10:13:48.940 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:49 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v546: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 71 KiB/s wr, 36 op/s Dec 2 05:13:51 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "cab3f864-d18a-47fe-ac99-2bece590c4f2_66e86b61-d3e8-4024-bc38-f520916b3578", "force": true, "format": "json"}]: dispatch Dec 2 05:13:51 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cab3f864-d18a-47fe-ac99-2bece590c4f2_66e86b61-d3e8-4024-bc38-f520916b3578, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:51 localhost ceph-mgr[287188]: 
[volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' Dec 2 05:13:51 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta' Dec 2 05:13:51 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cab3f864-d18a-47fe-ac99-2bece590c4f2_66e86b61-d3e8-4024-bc38-f520916b3578, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:51 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "cab3f864-d18a-47fe-ac99-2bece590c4f2", "force": true, "format": "json"}]: dispatch Dec 2 05:13:51 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cab3f864-d18a-47fe-ac99-2bece590c4f2, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:51 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' Dec 2 05:13:51 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta' Dec 2 05:13:51 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:cab3f864-d18a-47fe-ac99-2bece590c4f2, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v547: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 54 KiB/s wr, 25 op/s Dec 2 05:13:51 localhost nova_compute[281045]: 2025-12-02 10:13:51.455 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:51 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e224 e224: 6 total, 6 up, 6 in Dec 2 05:13:51 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:13:51 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2051237495' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:13:51 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:13:51 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2051237495' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:13:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:13:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:13:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:13:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. 
Dec 2 05:13:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:13:53 localhost podman[323633]: 2025-12-02 10:13:53.090279298 +0000 UTC m=+0.091662888 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:13:53 localhost podman[323634]: 2025-12-02 10:13:53.144844484 +0000 UTC m=+0.142946293 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:13:53 localhost podman[323634]: 2025-12-02 10:13:53.156887644 +0000 UTC m=+0.154989453 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 
'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:13:53 localhost systemd[1]: tmp-crun.FRORne.mount: Deactivated successfully. Dec 2 05:13:53 localhost podman[323635]: 2025-12-02 10:13:53.203912209 +0000 UTC m=+0.197647204 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:13:53 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v549: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 116 KiB/s rd, 103 KiB/s wr, 165 op/s Dec 2 05:13:53 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:13:53 localhost podman[323635]: 2025-12-02 10:13:53.245037533 +0000 UTC m=+0.238772518 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 2 05:13:53 localhost podman[323641]: 2025-12-02 10:13:53.256890747 +0000 UTC m=+0.243086681 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 05:13:53 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:13:53 localhost podman[323633]: 2025-12-02 10:13:53.279082639 +0000 UTC m=+0.280466189 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Dec 2 05:13:53 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: 
Deactivated successfully. Dec 2 05:13:53 localhost podman[323641]: 2025-12-02 10:13:53.299236648 +0000 UTC m=+0.285432682 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2) Dec 2 05:13:53 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:13:53 localhost nova_compute[281045]: 2025-12-02 10:13:53.947 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:54 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "71fe3ff3-77b1-42b9-a13c-7c107bdd326d_d24362ed-a55a-4034-ba1b-811e27325111", "force": true, "format": "json"}]: dispatch Dec 2 05:13:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:71fe3ff3-77b1-42b9-a13c-7c107bdd326d_d24362ed-a55a-4034-ba1b-811e27325111, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:54 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' Dec 2 05:13:54 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta' Dec 2 05:13:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:71fe3ff3-77b1-42b9-a13c-7c107bdd326d_d24362ed-a55a-4034-ba1b-811e27325111, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < "" Dec 2 05:13:54 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "71fe3ff3-77b1-42b9-a13c-7c107bdd326d", "force": true, 
"format": "json"}]: dispatch
Dec 2 05:13:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:71fe3ff3-77b1-42b9-a13c-7c107bdd326d, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 2 05:13:54 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 2 05:13:54 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 2 05:13:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:71fe3ff3-77b1-42b9-a13c-7c107bdd326d, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 2 05:13:55 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v550: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 94 KiB/s rd, 84 KiB/s wr, 134 op/s
Dec 2 05:13:56 localhost nova_compute[281045]: 2025-12-02 10:13:56.460 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:56 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e225 e225: 6 total, 6 up, 6 in
Dec 2 05:13:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v552: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 71 KiB/s rd, 23 KiB/s wr, 98 op/s
Dec 2 05:13:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:13:57 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "format": "json"}]: dispatch
Dec 2 05:13:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:afb0a218-82d6-4848-bc26-a77f5d927675, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 2 05:13:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:afb0a218-82d6-4848-bc26-a77f5d927675, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 2 05:13:57 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:13:57.609+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'afb0a218-82d6-4848-bc26-a77f5d927675' of type subvolume
Dec 2 05:13:57 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'afb0a218-82d6-4848-bc26-a77f5d927675' of type subvolume
Dec 2 05:13:57 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "force": true, "format": "json"}]: dispatch
Dec 2 05:13:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 2 05:13:57 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675'' moved to trashcan
Dec 2 05:13:57 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 2 05:13:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 2 05:13:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e226 e226: 6 total, 6 up, 6 in
Dec 2 05:13:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 2 05:13:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 2 05:13:58 localhost podman[323716]: 2025-12-02 10:13:58.076932592 +0000 UTC m=+0.079949328 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, name=ubi9-minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1755695350, io.openshift.expose-services=)
Dec 2 05:13:58 localhost podman[323716]: 2025-12-02 10:13:58.088517128 +0000 UTC m=+0.091533864 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, name=ubi9-minimal)
Dec 2 05:13:58 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 2 05:13:58 localhost podman[323715]: 2025-12-02 10:13:58.182816314 +0000 UTC m=+0.187371967 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 05:13:58 localhost podman[323715]: 2025-12-02 10:13:58.219912675 +0000 UTC m=+0.224468338 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 2 05:13:58 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 2 05:13:58 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e227 e227: 6 total, 6 up, 6 in
Dec 2 05:13:58 localhost nova_compute[281045]: 2025-12-02 10:13:58.987 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v555: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 56 KiB/s wr, 29 op/s
Dec 2 05:14:01 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 2 05:14:01 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4053296986' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 2 05:14:01 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 2 05:14:01 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4053296986' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 2 05:14:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v556: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 56 KiB/s wr, 29 op/s
Dec 2 05:14:01 localhost nova_compute[281045]: 2025-12-02 10:14:01.496 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:14:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:14:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:14:03.183 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:14:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:14:03.183 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:14:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:14:03.184 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:14:03 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 104 KiB/s rd, 58 KiB/s wr, 150 op/s
Dec 2 05:14:03 localhost podman[239757]: time="2025-12-02T10:14:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 05:14:03 localhost podman[239757]: @ - - [02/Dec/2025:10:14:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 2 05:14:03 localhost podman[239757]: @ - - [02/Dec/2025:10:14:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19239 "" "Go-http-client/1.1"
Dec 2 05:14:03 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e228 e228: 6 total, 6 up, 6 in
Dec 2 05:14:04 localhost nova_compute[281045]: 2025-12-02 10:14:04.029 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:14:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 2 05:14:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2266163141' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 2 05:14:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 2 05:14:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2266163141' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 2 05:14:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 05:14:05 localhost podman[323757]: 2025-12-02 10:14:05.074979989 +0000 UTC m=+0.077186163 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible)
Dec 2 05:14:05 localhost podman[323757]: 2025-12-02 10:14:05.107431685 +0000 UTC m=+0.109637929 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 2 05:14:05 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 05:14:05 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 51 KiB/s wr, 130 op/s
Dec 2 05:14:05 localhost nova_compute[281045]: 2025-12-02 10:14:05.543 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:14:06 localhost nova_compute[281045]: 2025-12-02 10:14:06.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:14:06 localhost nova_compute[281045]: 2025-12-02 10:14:06.542 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:14:06 localhost nova_compute[281045]: 2025-12-02 10:14:06.556 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:14:06 localhost nova_compute[281045]: 2025-12-02 10:14:06.557 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:14:06 localhost nova_compute[281045]: 2025-12-02 10:14:06.557 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:14:06 localhost nova_compute[281045]: 2025-12-02 10:14:06.557 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 2 05:14:06 localhost nova_compute[281045]: 2025-12-02 10:14:06.558 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 05:14:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e229 e229: 6 total, 6 up, 6 in
Dec 2 05:14:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:14:06
Dec 2 05:14:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 2 05:14:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 2 05:14:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['vms', 'images', 'volumes', '.mgr', 'manila_data', 'backups', 'manila_metadata']
Dec 2 05:14:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 2 05:14:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:14:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:14:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 2 05:14:07 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/793956818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 2 05:14:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:14:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:14:07 localhost nova_compute[281045]: 2025-12-02 10:14:07.021 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 05:14:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 2 05:14:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 2 05:14:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 5.2 KiB/s wr, 99 op/s
Dec 2 05:14:07 localhost nova_compute[281045]: 2025-12-02 10:14:07.262 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 2 05:14:07 localhost nova_compute[281045]: 2025-12-02 10:14:07.265 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11483MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 2 05:14:07 localhost nova_compute[281045]: 2025-12-02 10:14:07.265 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:14:07 localhost nova_compute[281045]: 2025-12-02 10:14:07.266 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:14:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 2 05:14:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:14:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 2 05:14:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:14:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 2 05:14:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:14:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 2 05:14:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:14:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 2 05:14:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:14:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 2 05:14:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:14:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 2 05:14:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 2 05:14:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0007099298576214405 of space, bias 4.0, pg target 0.5651041666666666 quantized to 16 (current 16)
Dec 2 05:14:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 2 05:14:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 2 05:14:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 2 05:14:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 2 05:14:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 2 05:14:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 2 05:14:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 2 05:14:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 2 05:14:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 2 05:14:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 2 05:14:07 localhost nova_compute[281045]: 2025-12-02 10:14:07.357 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 2 05:14:07 localhost nova_compute[281045]: 2025-12-02 10:14:07.357 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 2 05:14:07 localhost nova_compute[281045]: 2025-12-02 10:14:07.380 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 05:14:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:14:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 2 05:14:07 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3658023958' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 2 05:14:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 2 05:14:07 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3658023958' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 2 05:14:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e230 e230: 6 total, 6 up, 6 in
Dec 2 05:14:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 2 05:14:07 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3646158002' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 2 05:14:07 localhost nova_compute[281045]: 2025-12-02 10:14:07.818 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 05:14:07 localhost nova_compute[281045]: 2025-12-02 10:14:07.826 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 2 05:14:07 localhost nova_compute[281045]: 2025-12-02 10:14:07.841 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 2 05:14:07 localhost nova_compute[281045]: 2025-12-02 10:14:07.844 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 2 05:14:07 localhost nova_compute[281045]: 2025-12-02 10:14:07.844 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:14:08 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 2 05:14:08 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/219637169' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 2 05:14:08 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 2 05:14:08 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.?
172.18.0.32:0/219637169' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:14:08 localhost nova_compute[281045]: 2025-12-02 10:14:08.845 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:08 localhost nova_compute[281045]: 2025-12-02 10:14:08.846 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:08 localhost nova_compute[281045]: 2025-12-02 10:14:08.846 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:08 localhost nova_compute[281045]: 2025-12-02 10:14:08.846 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:09 localhost nova_compute[281045]: 2025-12-02 10:14:09.033 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:09 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v563: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 27 KiB/s wr, 89 op/s Dec 2 05:14:09 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:14:09 
localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1054534984' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:14:09 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:14:09 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1054534984' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:14:09 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e231 e231: 6 total, 6 up, 6 in Dec 2 05:14:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 27 KiB/s wr, 89 op/s Dec 2 05:14:11 localhost nova_compute[281045]: 2025-12-02 10:14:11.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:11 localhost nova_compute[281045]: 2025-12-02 10:14:11.529 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:14:11 localhost nova_compute[281045]: 2025-12-02 10:14:11.529 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:14:11 localhost nova_compute[281045]: 2025-12-02 10:14:11.556 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for 
network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 05:14:11 localhost nova_compute[281045]: 2025-12-02 10:14:11.556 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:11 localhost nova_compute[281045]: 2025-12-02 10:14:11.564 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:12 localhost openstack_network_exporter[241816]: ERROR 10:14:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:14:12 localhost openstack_network_exporter[241816]: ERROR 10:14:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:14:12 localhost openstack_network_exporter[241816]: ERROR 10:14:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:14:12 localhost openstack_network_exporter[241816]: ERROR 10:14:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:14:12 localhost openstack_network_exporter[241816]: Dec 2 05:14:12 localhost openstack_network_exporter[241816]: ERROR 10:14:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:14:12 localhost openstack_network_exporter[241816]: Dec 2 05:14:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:12 localhost ovn_controller[153778]: 2025-12-02T10:14:12Z|00231|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory Dec 2 05:14:13 localhost 
ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:14:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < "" Dec 2 05:14:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/.meta.tmp' Dec 2 05:14:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/.meta.tmp' to config b'/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/.meta' Dec 2 05:14:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < "" Dec 2 05:14:13 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "format": "json"}]: dispatch Dec 2 05:14:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < "" Dec 2 05:14:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v566: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.3 MiB/s rd, 27 KiB/s wr, 182 op/s Dec 2 
05:14:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < "" Dec 2 05:14:13 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "235a8d4c-ab29-4d51-b38b-3a594da63103", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:14:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:235a8d4c-ab29-4d51-b38b-3a594da63103, vol_name:cephfs) < "" Dec 2 05:14:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/235a8d4c-ab29-4d51-b38b-3a594da63103/.meta.tmp' Dec 2 05:14:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/235a8d4c-ab29-4d51-b38b-3a594da63103/.meta.tmp' to config b'/volumes/_nogroup/235a8d4c-ab29-4d51-b38b-3a594da63103/.meta' Dec 2 05:14:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:235a8d4c-ab29-4d51-b38b-3a594da63103, vol_name:cephfs) < "" Dec 2 05:14:13 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "235a8d4c-ab29-4d51-b38b-3a594da63103", "format": "json"}]: dispatch Dec 2 05:14:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:235a8d4c-ab29-4d51-b38b-3a594da63103, vol_name:cephfs) < "" Dec 2 05:14:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:235a8d4c-ab29-4d51-b38b-3a594da63103, vol_name:cephfs) < "" Dec 2 05:14:13 localhost nova_compute[281045]: 2025-12-02 10:14:13.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:13 localhost nova_compute[281045]: 2025-12-02 10:14:13.555 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:13 localhost nova_compute[281045]: 2025-12-02 10:14:13.556 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:14:14 localhost nova_compute[281045]: 2025-12-02 10:14:14.036 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v567: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 22 KiB/s wr, 149 op/s Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.445 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:16 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "auth_id": "tempest-cephx-id-185695304", "tenant_id": "5974c1b38c02486098e277d58b491dac", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:14:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-185695304, format:json, 
prefix:fs subvolume authorize, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, tenant_id:5974c1b38c02486098e277d58b491dac, vol_name:cephfs) < "" Dec 2 05:14:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-185695304", "format": "json"} v 0) Dec 2 05:14:16 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-185695304", "format": "json"} : dispatch Dec 2 05:14:16 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-185695304 with tenant 5974c1b38c02486098e277d58b491dac Dec 2 05:14:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-185695304", "caps": ["mds", "allow rw path=/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/555d1535-2ead-4b78-97f7-0c5bf5ade719", "osd", "allow rw pool=manila_data namespace=fsvolumens_10335e0e-f484-4bf5-b0cc-29a04393ec4e", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:14:16 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-185695304", "caps": ["mds", "allow rw path=/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/555d1535-2ead-4b78-97f7-0c5bf5ade719", "osd", "allow rw pool=manila_data namespace=fsvolumens_10335e0e-f484-4bf5-b0cc-29a04393ec4e", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-185695304, format:json, prefix:fs subvolume authorize, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, 
tenant_id:5974c1b38c02486098e277d58b491dac, vol_name:cephfs) < "" Dec 2 05:14:16 localhost nova_compute[281045]: 2025-12-02 10:14:16.596 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e232 e232: 6 total, 6 up, 6 in Dec 2 05:14:16 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "feb9c965-7b4c-4671-ab34-1817317dacc0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:14:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:feb9c965-7b4c-4671-ab34-1817317dacc0, vol_name:cephfs) < "" Dec 2 05:14:16 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/feb9c965-7b4c-4671-ab34-1817317dacc0/.meta.tmp' Dec 2 05:14:16 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/feb9c965-7b4c-4671-ab34-1817317dacc0/.meta.tmp' to config b'/volumes/_nogroup/feb9c965-7b4c-4671-ab34-1817317dacc0/.meta' Dec 2 05:14:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:feb9c965-7b4c-4671-ab34-1817317dacc0, vol_name:cephfs) < "" Dec 2 05:14:16 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "feb9c965-7b4c-4671-ab34-1817317dacc0", "format": "json"}]: dispatch Dec 2 05:14:16 localhost 
ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:feb9c965-7b4c-4671-ab34-1817317dacc0, vol_name:cephfs) < "" Dec 2 05:14:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:feb9c965-7b4c-4671-ab34-1817317dacc0, vol_name:cephfs) < "" Dec 2 05:14:17 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-185695304", "format": "json"} : dispatch Dec 2 05:14:17 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-185695304", "caps": ["mds", "allow rw path=/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/555d1535-2ead-4b78-97f7-0c5bf5ade719", "osd", "allow rw pool=manila_data namespace=fsvolumens_10335e0e-f484-4bf5-b0cc-29a04393ec4e", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:17 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-185695304", "caps": ["mds", "allow rw path=/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/555d1535-2ead-4b78-97f7-0c5bf5ade719", "osd", "allow rw pool=manila_data namespace=fsvolumens_10335e0e-f484-4bf5-b0cc-29a04393ec4e", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:17 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-185695304", "caps": ["mds", "allow rw path=/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/555d1535-2ead-4b78-97f7-0c5bf5ade719", "osd", "allow rw pool=manila_data namespace=fsvolumens_10335e0e-f484-4bf5-b0cc-29a04393ec4e", "mon", "allow r"], "format": "json"}]': finished 
Dec 2 05:14:17 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "auth_id": "tempest-cephx-id-185695304", "format": "json"}]: dispatch Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-185695304, format:json, prefix:fs subvolume deauthorize, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < "" Dec 2 05:14:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v569: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.4 KiB/s wr, 81 op/s Dec 2 05:14:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-185695304", "format": "json"} v 0) Dec 2 05:14:17 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-185695304", "format": "json"} : dispatch Dec 2 05:14:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-185695304"} v 0) Dec 2 05:14:17 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-185695304"} : dispatch Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-185695304, format:json, prefix:fs subvolume deauthorize, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < "" Dec 2 05:14:17 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "auth_id": "tempest-cephx-id-185695304", "format": "json"}]: dispatch Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-185695304, format:json, prefix:fs subvolume evict, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < "" Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-185695304, client_metadata.root=/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/555d1535-2ead-4b78-97f7-0c5bf5ade719 Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-185695304, format:json, prefix:fs subvolume evict, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < "" Dec 2 05:14:17 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "format": "json"}]: dispatch Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:17 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:17.448+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 
'clone-status' is not allowed on subvolume '10335e0e-f484-4bf5-b0cc-29a04393ec4e' of type subvolume Dec 2 05:14:17 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '10335e0e-f484-4bf5-b0cc-29a04393ec4e' of type subvolume Dec 2 05:14:17 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "force": true, "format": "json"}]: dispatch Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < "" Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e'' moved to trashcan Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < "" Dec 2 05:14:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:17 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ed816090-7c9e-4964-a11f-502383746c0b", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, 
prefix:fs subvolume create, size:4294967296, sub_name:ed816090-7c9e-4964-a11f-502383746c0b, vol_name:cephfs) < "" Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ed816090-7c9e-4964-a11f-502383746c0b/.meta.tmp' Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ed816090-7c9e-4964-a11f-502383746c0b/.meta.tmp' to config b'/volumes/_nogroup/ed816090-7c9e-4964-a11f-502383746c0b/.meta' Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:ed816090-7c9e-4964-a11f-502383746c0b, vol_name:cephfs) < "" Dec 2 05:14:17 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ed816090-7c9e-4964-a11f-502383746c0b", "format": "json"}]: dispatch Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ed816090-7c9e-4964-a11f-502383746c0b, vol_name:cephfs) < "" Dec 2 05:14:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ed816090-7c9e-4964-a11f-502383746c0b, vol_name:cephfs) < "" Dec 2 05:14:18 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-185695304", "format": "json"} : dispatch Dec 2 05:14:18 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-185695304"} : dispatch Dec 2 05:14:18 localhost ceph-mon[301710]: 
from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-185695304"} : dispatch Dec 2 05:14:18 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-185695304"}]': finished Dec 2 05:14:18 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e233 e233: 6 total, 6 up, 6 in Dec 2 05:14:19 localhost nova_compute[281045]: 2025-12-02 10:14:19.039 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 2.8 MiB/s wr, 157 op/s Dec 2 05:14:19 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e234 e234: 6 total, 6 up, 6 in Dec 2 05:14:20 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "feb9c965-7b4c-4671-ab34-1817317dacc0", "format": "json"}]: dispatch Dec 2 05:14:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:feb9c965-7b4c-4671-ab34-1817317dacc0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:feb9c965-7b4c-4671-ab34-1817317dacc0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:20 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:20.254+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'feb9c965-7b4c-4671-ab34-1817317dacc0' of type subvolume Dec 2 05:14:20 localhost ceph-mgr[287188]: 
mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'feb9c965-7b4c-4671-ab34-1817317dacc0' of type subvolume Dec 2 05:14:20 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "feb9c965-7b4c-4671-ab34-1817317dacc0", "force": true, "format": "json"}]: dispatch Dec 2 05:14:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:feb9c965-7b4c-4671-ab34-1817317dacc0, vol_name:cephfs) < "" Dec 2 05:14:20 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/feb9c965-7b4c-4671-ab34-1817317dacc0'' moved to trashcan Dec 2 05:14:20 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:14:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:feb9c965-7b4c-4671-ab34-1817317dacc0, vol_name:cephfs) < "" Dec 2 05:14:20 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e235 e235: 6 total, 6 up, 6 in Dec 2 05:14:20 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:14:20 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2158575046' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:14:20 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:14:20 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2158575046' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:14:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "590dca3f-4f85-48ff-a801-1b49410a7fa1", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:14:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, vol_name:cephfs) < "" Dec 2 05:14:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/590dca3f-4f85-48ff-a801-1b49410a7fa1/.meta.tmp' Dec 2 05:14:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/590dca3f-4f85-48ff-a801-1b49410a7fa1/.meta.tmp' to config b'/volumes/_nogroup/590dca3f-4f85-48ff-a801-1b49410a7fa1/.meta' Dec 2 05:14:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, vol_name:cephfs) < "" Dec 2 05:14:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "590dca3f-4f85-48ff-a801-1b49410a7fa1", "format": "json"}]: dispatch Dec 2 05:14:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, vol_name:cephfs) < "" Dec 2 05:14:21 localhost ceph-mgr[287188]: [volumes 
INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, vol_name:cephfs) < "" Dec 2 05:14:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v574: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 4.9 MiB/s wr, 133 op/s Dec 2 05:14:21 localhost nova_compute[281045]: 2025-12-02 10:14:21.630 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 158 KiB/s rd, 3.8 MiB/s wr, 241 op/s Dec 2 05:14:23 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "235a8d4c-ab29-4d51-b38b-3a594da63103", "format": "json"}]: dispatch Dec 2 05:14:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:235a8d4c-ab29-4d51-b38b-3a594da63103, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:235a8d4c-ab29-4d51-b38b-3a594da63103, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:23 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '235a8d4c-ab29-4d51-b38b-3a594da63103' of type subvolume Dec 2 05:14:23 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:23.510+0000 7fd37dd6f640 
-1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '235a8d4c-ab29-4d51-b38b-3a594da63103' of type subvolume Dec 2 05:14:23 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "235a8d4c-ab29-4d51-b38b-3a594da63103", "force": true, "format": "json"}]: dispatch Dec 2 05:14:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:235a8d4c-ab29-4d51-b38b-3a594da63103, vol_name:cephfs) < "" Dec 2 05:14:23 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/235a8d4c-ab29-4d51-b38b-3a594da63103'' moved to trashcan Dec 2 05:14:23 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:14:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:235a8d4c-ab29-4d51-b38b-3a594da63103, vol_name:cephfs) < "" Dec 2 05:14:23 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e236 e236: 6 total, 6 up, 6 in Dec 2 05:14:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:14:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:14:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:14:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. 
Dec 2 05:14:24 localhost nova_compute[281045]: 2025-12-02 10:14:24.041 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:24 localhost podman[323822]: 2025-12-02 10:14:24.08909509 +0000 UTC m=+0.084066635 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 05:14:24 localhost podman[323822]: 2025-12-02 10:14:24.101753419 +0000 UTC m=+0.096724954 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': 
'/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:14:24 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:14:24 localhost podman[323821]: 2025-12-02 10:14:24.143935095 +0000 UTC m=+0.145301076 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 2 05:14:24 localhost podman[323829]: 2025-12-02 10:14:24.195097527 +0000 UTC m=+0.185195892 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:14:24 localhost podman[323828]: 2025-12-02 
10:14:24.247280721 +0000 UTC m=+0.237126368 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 05:14:24 localhost podman[323829]: 2025-12-02 10:14:24.252794699 +0000 UTC m=+0.242893084 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 05:14:24 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:14:24 localhost podman[323821]: 2025-12-02 10:14:24.277591811 +0000 UTC m=+0.278957732 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 2 05:14:24 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: 
Deactivated successfully. Dec 2 05:14:24 localhost podman[323828]: 2025-12-02 10:14:24.308909864 +0000 UTC m=+0.298755541 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125) Dec 2 05:14:24 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:14:24 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ed816090-7c9e-4964-a11f-502383746c0b", "format": "json"}]: dispatch Dec 2 05:14:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ed816090-7c9e-4964-a11f-502383746c0b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ed816090-7c9e-4964-a11f-502383746c0b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:24 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:24.484+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ed816090-7c9e-4964-a11f-502383746c0b' of type subvolume Dec 2 05:14:24 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ed816090-7c9e-4964-a11f-502383746c0b' of type subvolume Dec 2 05:14:24 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ed816090-7c9e-4964-a11f-502383746c0b", "force": true, "format": "json"}]: dispatch Dec 2 05:14:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ed816090-7c9e-4964-a11f-502383746c0b, vol_name:cephfs) < "" Dec 2 05:14:24 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ed816090-7c9e-4964-a11f-502383746c0b'' moved to trashcan Dec 2 05:14:24 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 
'cephfs' Dec 2 05:14:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ed816090-7c9e-4964-a11f-502383746c0b, vol_name:cephfs) < "" Dec 2 05:14:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v577: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 78 KiB/s wr, 140 op/s Dec 2 05:14:25 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e237 e237: 6 total, 6 up, 6 in Dec 2 05:14:26 localhost nova_compute[281045]: 2025-12-02 10:14:26.672 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:26 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e238 e238: 6 total, 6 up, 6 in Dec 2 05:14:26 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f9ec3f6d-7d6e-4cd3-a305-e37d986864dd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:14:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, vol_name:cephfs) < "" Dec 2 05:14:26 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f9ec3f6d-7d6e-4cd3-a305-e37d986864dd/.meta.tmp' Dec 2 05:14:26 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f9ec3f6d-7d6e-4cd3-a305-e37d986864dd/.meta.tmp' to config b'/volumes/_nogroup/f9ec3f6d-7d6e-4cd3-a305-e37d986864dd/.meta' Dec 2 05:14:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, vol_name:cephfs) < "" Dec 2 05:14:26 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f9ec3f6d-7d6e-4cd3-a305-e37d986864dd", "format": "json"}]: dispatch Dec 2 05:14:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, vol_name:cephfs) < "" Dec 2 05:14:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, vol_name:cephfs) < "" Dec 2 05:14:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 78 KiB/s wr, 140 op/s Dec 2 05:14:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:27 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "590dca3f-4f85-48ff-a801-1b49410a7fa1", "format": "json"}]: dispatch Dec 2 05:14:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:27 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:27.666+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '590dca3f-4f85-48ff-a801-1b49410a7fa1' of type subvolume Dec 2 05:14:27 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '590dca3f-4f85-48ff-a801-1b49410a7fa1' of type subvolume Dec 2 05:14:27 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "590dca3f-4f85-48ff-a801-1b49410a7fa1", "force": true, "format": "json"}]: dispatch Dec 2 05:14:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, vol_name:cephfs) < "" Dec 2 05:14:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/590dca3f-4f85-48ff-a801-1b49410a7fa1'' moved to trashcan Dec 2 05:14:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:14:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, vol_name:cephfs) < "" Dec 2 05:14:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e239 e239: 6 total, 6 up, 6 in Dec 2 05:14:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:14:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 05:14:29 localhost nova_compute[281045]: 2025-12-02 10:14:29.043 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:29 localhost podman[323904]: 2025-12-02 10:14:29.080147586 +0000 UTC m=+0.086132177 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:14:29 localhost podman[323904]: 2025-12-02 10:14:29.087992447 +0000 UTC m=+0.093977018 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:14:29 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:14:29 localhost podman[323905]: 2025-12-02 10:14:29.130962398 +0000 UTC m=+0.133969558 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc.) Dec 2 05:14:29 localhost podman[323905]: 2025-12-02 10:14:29.143492822 +0000 UTC m=+0.146499962 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible) Dec 2 05:14:29 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:14:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 95 KiB/s wr, 107 op/s Dec 2 05:14:29 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e240 e240: 6 total, 6 up, 6 in Dec 2 05:14:30 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f9ec3f6d-7d6e-4cd3-a305-e37d986864dd", "format": "json"}]: dispatch Dec 2 05:14:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:30 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:30.250+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f9ec3f6d-7d6e-4cd3-a305-e37d986864dd' of type subvolume Dec 2 05:14:30 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f9ec3f6d-7d6e-4cd3-a305-e37d986864dd' of type subvolume Dec 2 05:14:30 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f9ec3f6d-7d6e-4cd3-a305-e37d986864dd", "force": true, "format": "json"}]: dispatch Dec 2 05:14:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, vol_name:cephfs) < "" 
Dec 2 05:14:30 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f9ec3f6d-7d6e-4cd3-a305-e37d986864dd'' moved to trashcan Dec 2 05:14:30 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:14:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, vol_name:cephfs) < "" Dec 2 05:14:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v584: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 95 KiB/s wr, 107 op/s Dec 2 05:14:31 localhost nova_compute[281045]: 2025-12-02 10:14:31.676 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e241 e241: 6 total, 6 up, 6 in Dec 2 05:14:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v586: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 141 KiB/s rd, 140 KiB/s wr, 196 op/s Dec 2 05:14:33 localhost podman[239757]: time="2025-12-02T10:14:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:14:33 localhost podman[239757]: @ - - [02/Dec/2025:10:14:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:14:33 localhost podman[239757]: @ - - [02/Dec/2025:10:14:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19243 "" "Go-http-client/1.1" 
Dec 2 05:14:33 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e242 e242: 6 total, 6 up, 6 in Dec 2 05:14:34 localhost nova_compute[281045]: 2025-12-02 10:14:34.045 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:34 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:14:34 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:14:34 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:14:34 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:14:34 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:14:34 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev e420e6a0-7dd2-480b-8a7d-89098bda4b33 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:14:34 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev e420e6a0-7dd2-480b-8a7d-89098bda4b33 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:14:34 localhost ceph-mgr[287188]: [progress INFO root] Completed event e420e6a0-7dd2-480b-8a7d-89098bda4b33 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:14:34 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:14:34 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:14:34 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:14:34 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:14:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v588: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 71 KiB/s rd, 54 KiB/s wr, 99 op/s Dec 2 05:14:35 localhost nova_compute[281045]: 2025-12-02 10:14:35.767 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:14:35.767 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:14:35 localhost ovn_metadata_agent[159477]: 2025-12-02 10:14:35.769 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:14:35 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e243 e243: 6 total, 6 up, 6 in Dec 2 05:14:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:14:36 localhost podman[324029]: 2025-12-02 10:14:36.084056271 +0000 UTC m=+0.086960853 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 05:14:36 localhost podman[324029]: 2025-12-02 10:14:36.122895055 +0000 UTC 
m=+0.125799627 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd) Dec 2 05:14:36 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:14:36 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:14:36 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:36 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/.meta.tmp' Dec 2 05:14:36 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/.meta.tmp' to config b'/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/.meta' Dec 2 05:14:36 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:36 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "format": "json"}]: dispatch Dec 2 05:14:36 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:36 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:36 localhost nova_compute[281045]: 2025-12-02 10:14:36.705 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:36 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e244 e244: 6 total, 6 up, 6 in Dec 2 05:14:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:14:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:14:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:14:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:14:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:14:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:14:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v591: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 81 KiB/s rd, 61 KiB/s wr, 112 op/s Dec 2 05:14:37 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:14:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:14:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:38 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:14:39 localhost nova_compute[281045]: 2025-12-02 10:14:39.050 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:39 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v592: 177 
pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 70 KiB/s rd, 28 KiB/s wr, 94 op/s Dec 2 05:14:39 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve49", "tenant_id": "8f75117f8554499b9fbaa9c9062eeeef", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:14:39 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, tenant_id:8f75117f8554499b9fbaa9c9062eeeef, vol_name:cephfs) < "" Dec 2 05:14:39 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) Dec 2 05:14:39 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Dec 2 05:14:39 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID eve49 with tenant 8f75117f8554499b9fbaa9c9062eeeef Dec 2 05:14:39 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:14:39 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:39 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, tenant_id:8f75117f8554499b9fbaa9c9062eeeef, vol_name:cephfs) < "" Dec 2 05:14:40 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Dec 2 05:14:40 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:40 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:40 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", 
"allow r"], "format": "json"}]': finished Dec 2 05:14:40 localhost ovn_metadata_agent[159477]: 2025-12-02 10:14:40.771 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:14:40 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:14:40.788 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:14:40Z, description=, device_id=3692a4cb-56a0-4a89-90aa-c2a2654d3e13, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3068ca06-aff6-4755-8b24-5457386fd1c7, ip_allocation=immediate, mac_address=fa:16:3e:58:d3:5a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, 
revision_number=1, security_groups=[], standard_attr_id=3470, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:14:40Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:14:41 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:14:41 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:14:41 localhost podman[324065]: 2025-12-02 10:14:41.012848257 +0000 UTC m=+0.058825779 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 05:14:41 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:14:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v593: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 23 KiB/s wr, 77 op/s Dec 2 05:14:41 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:14:41.266 262347 INFO neutron.agent.dhcp.agent [None req-b8ea98ec-7624-454d-b0e2-63f13f9c0a06 - - - - - -] DHCP configuration for ports {'3068ca06-aff6-4755-8b24-5457386fd1c7'} is completed#033[00m Dec 2 05:14:41 localhost nova_compute[281045]: 2025-12-02 10:14:41.569 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:41 localhost nova_compute[281045]: 2025-12-02 10:14:41.753 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e245 e245: 6 total, 6 up, 6 in Dec 2 05:14:42 localhost openstack_network_exporter[241816]: ERROR 10:14:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:14:42 localhost openstack_network_exporter[241816]: ERROR 10:14:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:14:42 localhost openstack_network_exporter[241816]: ERROR 10:14:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:14:42 localhost openstack_network_exporter[241816]: ERROR 10:14:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:14:42 localhost openstack_network_exporter[241816]: Dec 2 05:14:42 localhost openstack_network_exporter[241816]: ERROR 10:14:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:14:42 localhost openstack_network_exporter[241816]: Dec 2 05:14:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:43 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve48", "tenant_id": "8f75117f8554499b9fbaa9c9062eeeef", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:14:43 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, tenant_id:8f75117f8554499b9fbaa9c9062eeeef, vol_name:cephfs) < "" 
Dec 2 05:14:43 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) Dec 2 05:14:43 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Dec 2 05:14:43 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID eve48 with tenant 8f75117f8554499b9fbaa9c9062eeeef Dec 2 05:14:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 64 KiB/s wr, 80 op/s Dec 2 05:14:43 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:14:43 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:43 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, tenant_id:8f75117f8554499b9fbaa9c9062eeeef, vol_name:cephfs) < "" Dec 2 05:14:43 localhost 
ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Dec 2 05:14:43 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:43 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:43 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:14:44 localhost nova_compute[281045]: 2025-12-02 10:14:44.056 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:44 localhost nova_compute[281045]: 2025-12-02 10:14:44.899 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v596: 177 pgs: 177 
active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 55 KiB/s wr, 69 op/s Dec 2 05:14:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:14:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < "" Dec 2 05:14:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta.tmp' Dec 2 05:14:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta.tmp' to config b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta' Dec 2 05:14:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < "" Dec 2 05:14:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "format": "json"}]: dispatch Dec 2 05:14:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < "" Dec 2 05:14:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < "" Dec 2 05:14:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve48", "format": "json"}]: dispatch Dec 2 05:14:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:46 localhost nova_compute[281045]: 2025-12-02 10:14:46.756 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:46 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) Dec 2 05:14:46 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Dec 2 05:14:46 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0) Dec 2 05:14:46 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Dec 2 05:14:46 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Dec 2 05:14:46 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : 
dispatch Dec 2 05:14:46 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Dec 2 05:14:46 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished Dec 2 05:14:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve48", "format": "json"}]: dispatch Dec 2 05:14:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1 Dec 2 05:14:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:14:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 47 KiB/s wr, 58 op/s Dec 2 05:14:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e245 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:48 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "25a2b72d-c9e0-4927-869c-054b3b3fd314", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:14:48 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, vol_name:cephfs) < "" Dec 2 05:14:48 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/25a2b72d-c9e0-4927-869c-054b3b3fd314/.meta.tmp' Dec 2 05:14:48 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/25a2b72d-c9e0-4927-869c-054b3b3fd314/.meta.tmp' to config b'/volumes/_nogroup/25a2b72d-c9e0-4927-869c-054b3b3fd314/.meta' Dec 2 05:14:48 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, vol_name:cephfs) < "" Dec 2 05:14:48 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "25a2b72d-c9e0-4927-869c-054b3b3fd314", "format": "json"}]: dispatch Dec 2 05:14:48 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, vol_name:cephfs) < "" Dec 2 05:14:48 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, vol_name:cephfs) < "" Dec 2 05:14:49 localhost nova_compute[281045]: 2025-12-02 10:14:49.059 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:49 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v598: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 90 KiB/s wr, 14 op/s Dec 2 05:14:49 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "snap_name": "4297d647-9a4c-4f1f-9f4b-d5919a5d649a", "format": "json"}]: dispatch Dec 2 05:14:49 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4297d647-9a4c-4f1f-9f4b-d5919a5d649a, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < "" Dec 2 05:14:49 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4297d647-9a4c-4f1f-9f4b-d5919a5d649a, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < "" Dec 2 05:14:50 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve47", "tenant_id": "8f75117f8554499b9fbaa9c9062eeeef", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:14:50 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, 
sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, tenant_id:8f75117f8554499b9fbaa9c9062eeeef, vol_name:cephfs) < "" Dec 2 05:14:50 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) Dec 2 05:14:50 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Dec 2 05:14:50 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID eve47 with tenant 8f75117f8554499b9fbaa9c9062eeeef Dec 2 05:14:50 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:14:50 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0. 
Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.146596) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46 Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490146658, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2296, "num_deletes": 265, "total_data_size": 3071162, "memory_usage": 3119360, "flush_reason": "Manual Compaction"} Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490159433, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 1997238, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28897, "largest_seqno": 31188, "table_properties": {"data_size": 1987914, "index_size": 5705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 22490, "raw_average_key_size": 22, "raw_value_size": 1968271, "raw_average_value_size": 1948, "num_data_blocks": 246, "num_entries": 1010, "num_filter_entries": 1010, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670381, "oldest_key_time": 1764670381, "file_creation_time": 1764670490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}} Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 12921 microseconds, and 5792 cpu microseconds. Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.159519) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 1997238 bytes OK Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.159548) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.161721) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.161742) EVENT_LOG_v1 {"time_micros": 1764670490161736, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.161765) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3060237, prev total WAL file size 
3060237, number of live WAL files 2. Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.162746) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. '7061786F73003133303533' seq:0, type:0; will stop at (end) Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(1950KB)], [45(17MB)] Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490162798, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 20452090, "oldest_snapshot_seqno": -1} Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 13999 keys, 18904073 bytes, temperature: kUnknown Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490265357, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 18904073, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18821563, "index_size": 46441, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35013, "raw_key_size": 374725, "raw_average_key_size": 26, "raw_value_size": 18580956, 
"raw_average_value_size": 1327, "num_data_blocks": 1750, "num_entries": 13999, "num_filter_entries": 13999, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}} Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.266041) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 18904073 bytes Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.268239) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.2 rd, 184.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 17.6 +0.0 blob) out(18.0 +0.0 blob), read-write-amplify(19.7) write-amplify(9.5) OK, records in: 14542, records dropped: 543 output_compression: NoCompression Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.268268) EVENT_LOG_v1 {"time_micros": 1764670490268256, "job": 26, "event": "compaction_finished", "compaction_time_micros": 102690, "compaction_time_cpu_micros": 50012, "output_level": 6, "num_output_files": 1, "total_output_size": 18904073, "num_input_records": 14542, "num_output_records": 13999, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490269012, "job": 26, "event": "table_file_deletion", "file_number": 47} Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490271871, "job": 
26, "event": "table_file_deletion", "file_number": 45} Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.162666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.271935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.271941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.271944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.271947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:14:50 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.271950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:14:50 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, tenant_id:8f75117f8554499b9fbaa9c9062eeeef, vol_name:cephfs) < "" Dec 2 05:14:50 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Dec 2 05:14:50 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:50 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:50 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:14:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 90 KiB/s wr, 14 op/s Dec 2 05:14:51 localhost nova_compute[281045]: 2025-12-02 10:14:51.760 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:52 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "25a2b72d-c9e0-4927-869c-054b3b3fd314", "format": "json"}]: dispatch Dec 2 05:14:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:52 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:52.050+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '25a2b72d-c9e0-4927-869c-054b3b3fd314' of type subvolume Dec 2 05:14:52 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '25a2b72d-c9e0-4927-869c-054b3b3fd314' of type subvolume Dec 2 05:14:52 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "25a2b72d-c9e0-4927-869c-054b3b3fd314", "force": true, "format": "json"}]: dispatch Dec 2 05:14:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, vol_name:cephfs) < "" Dec 2 05:14:52 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/25a2b72d-c9e0-4927-869c-054b3b3fd314'' moved to trashcan Dec 2 05:14:52 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:14:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, vol_name:cephfs) < "" Dec 2 05:14:52 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff6ef38f-1e0f-40c8-83c7-811b055e05a4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 
2 05:14:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, vol_name:cephfs) < "" Dec 2 05:14:52 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff6ef38f-1e0f-40c8-83c7-811b055e05a4/.meta.tmp' Dec 2 05:14:52 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff6ef38f-1e0f-40c8-83c7-811b055e05a4/.meta.tmp' to config b'/volumes/_nogroup/ff6ef38f-1e0f-40c8-83c7-811b055e05a4/.meta' Dec 2 05:14:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, vol_name:cephfs) < "" Dec 2 05:14:52 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff6ef38f-1e0f-40c8-83c7-811b055e05a4", "format": "json"}]: dispatch Dec 2 05:14:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, vol_name:cephfs) < "" Dec 2 05:14:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, vol_name:cephfs) < "" Dec 2 05:14:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:53 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v600: 177 pgs: 177 active+clean; 252 
MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 52 op/s Dec 2 05:14:53 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "794bbe46-18dc-46bb-ae82-7c247e68f409", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:14:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:794bbe46-18dc-46bb-ae82-7c247e68f409, vol_name:cephfs) < "" Dec 2 05:14:53 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/794bbe46-18dc-46bb-ae82-7c247e68f409/.meta.tmp' Dec 2 05:14:53 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/794bbe46-18dc-46bb-ae82-7c247e68f409/.meta.tmp' to config b'/volumes/_nogroup/794bbe46-18dc-46bb-ae82-7c247e68f409/.meta' Dec 2 05:14:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:794bbe46-18dc-46bb-ae82-7c247e68f409, vol_name:cephfs) < "" Dec 2 05:14:53 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "794bbe46-18dc-46bb-ae82-7c247e68f409", "format": "json"}]: dispatch Dec 2 05:14:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:794bbe46-18dc-46bb-ae82-7c247e68f409, vol_name:cephfs) < "" Dec 2 05:14:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:794bbe46-18dc-46bb-ae82-7c247e68f409, vol_name:cephfs) < "" Dec 2 05:14:53 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve47", "format": "json"}]: dispatch Dec 2 05:14:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:53 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) Dec 2 05:14:53 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Dec 2 05:14:53 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0) Dec 2 05:14:53 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Dec 2 05:14:53 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e246 e246: 6 total, 6 up, 6 in Dec 2 05:14:53 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Dec 2 05:14:53 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Dec 2 05:14:53 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Dec 2 05:14:53 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. Dec 2 05:14:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:53 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve47", "format": "json"}]: dispatch Dec 2 05:14:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:53 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1 Dec 2 05:14:53 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:14:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:54 localhost nova_compute[281045]: 2025-12-02 10:14:54.093 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:54 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished Dec 2 05:14:54 
localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e247 e247: 6 total, 6 up, 6 in Dec 2 05:14:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:14:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:14:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:14:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:14:55 localhost podman[324088]: 2025-12-02 10:14:55.107237842 +0000 UTC m=+0.101984115 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:14:55 localhost podman[324088]: 2025-12-02 10:14:55.140083301 +0000 UTC m=+0.134829604 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:14:55 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 05:14:55 localhost podman[324093]: 2025-12-02 10:14:55.15826779 +0000 UTC m=+0.140524059 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 05:14:55 localhost podman[324093]: 2025-12-02 10:14:55.243901641 +0000 UTC m=+0.226157940 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:14:55 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 252 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.8 MiB/s wr, 73 op/s Dec 2 05:14:55 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:14:55 localhost podman[324090]: 2025-12-02 10:14:55.250958408 +0000 UTC m=+0.236879419 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:14:55 localhost podman[324089]: 2025-12-02 10:14:55.316971776 +0000 UTC m=+0.302344651 container health_status 
8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:14:55 localhost podman[324090]: 2025-12-02 10:14:55.334043431 +0000 UTC m=+0.319964462 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3) Dec 2 05:14:55 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:14:55 localhost podman[324089]: 2025-12-02 10:14:55.350636841 +0000 UTC m=+0.336009736 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, 
config_id=edpm) Dec 2 05:14:55 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:14:55 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ec95b4f7-9427-4d16-81f2-f3cade322496", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:14:55 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ec95b4f7-9427-4d16-81f2-f3cade322496, vol_name:cephfs) < "" Dec 2 05:14:55 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ec95b4f7-9427-4d16-81f2-f3cade322496/.meta.tmp' Dec 2 05:14:55 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ec95b4f7-9427-4d16-81f2-f3cade322496/.meta.tmp' to config b'/volumes/_nogroup/ec95b4f7-9427-4d16-81f2-f3cade322496/.meta' Dec 2 05:14:55 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ec95b4f7-9427-4d16-81f2-f3cade322496, vol_name:cephfs) < "" Dec 2 05:14:55 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ec95b4f7-9427-4d16-81f2-f3cade322496", "format": "json"}]: dispatch Dec 2 05:14:55 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ec95b4f7-9427-4d16-81f2-f3cade322496, vol_name:cephfs) < "" Dec 2 05:14:55 localhost 
ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ec95b4f7-9427-4d16-81f2-f3cade322496, vol_name:cephfs) < "" Dec 2 05:14:56 localhost systemd[1]: tmp-crun.wosO8T.mount: Deactivated successfully. Dec 2 05:14:56 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "794bbe46-18dc-46bb-ae82-7c247e68f409", "format": "json"}]: dispatch Dec 2 05:14:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:794bbe46-18dc-46bb-ae82-7c247e68f409, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:794bbe46-18dc-46bb-ae82-7c247e68f409, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:56 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '794bbe46-18dc-46bb-ae82-7c247e68f409' of type subvolume Dec 2 05:14:56 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:56.744+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '794bbe46-18dc-46bb-ae82-7c247e68f409' of type subvolume Dec 2 05:14:56 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "794bbe46-18dc-46bb-ae82-7c247e68f409", "force": true, "format": "json"}]: dispatch Dec 2 05:14:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:794bbe46-18dc-46bb-ae82-7c247e68f409, vol_name:cephfs) < "" Dec 2 05:14:56 localhost 
nova_compute[281045]: 2025-12-02 10:14:56.809 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:56 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/794bbe46-18dc-46bb-ae82-7c247e68f409'' moved to trashcan Dec 2 05:14:56 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:14:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:794bbe46-18dc-46bb-ae82-7c247e68f409, vol_name:cephfs) < "" Dec 2 05:14:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v604: 177 pgs: 177 active+clean; 252 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 2.7 MiB/s wr, 57 op/s Dec 2 05:14:57 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve49", "format": "json"}]: dispatch Dec 2 05:14:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) Dec 2 05:14:57 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": 
"client.eve49", "format": "json"} : dispatch Dec 2 05:14:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0) Dec 2 05:14:57 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Dec 2 05:14:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:57 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve49", "format": "json"}]: dispatch Dec 2 05:14:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:57 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1 Dec 2 05:14:57 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:14:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:57 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": 
"aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "format": "json"}]: dispatch Dec 2 05:14:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:57 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f' of type subvolume Dec 2 05:14:57 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:57.874+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f' of type subvolume Dec 2 05:14:57 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "force": true, "format": "json"}]: dispatch Dec 2 05:14:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:57 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f'' moved to trashcan Dec 2 05:14:57 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:14:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < "" Dec 2 05:14:57 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Dec 2 05:14:57 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Dec 2 05:14:57 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Dec 2 05:14:57 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished Dec 2 05:14:58 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ec95b4f7-9427-4d16-81f2-f3cade322496", "format": "json"}]: dispatch Dec 2 05:14:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ec95b4f7-9427-4d16-81f2-f3cade322496, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ec95b4f7-9427-4d16-81f2-f3cade322496, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:14:58 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ec95b4f7-9427-4d16-81f2-f3cade322496' of type subvolume Dec 2 05:14:58 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:58.660+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ec95b4f7-9427-4d16-81f2-f3cade322496' of type subvolume Dec 2 05:14:58 localhost 
ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ec95b4f7-9427-4d16-81f2-f3cade322496", "force": true, "format": "json"}]: dispatch Dec 2 05:14:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ec95b4f7-9427-4d16-81f2-f3cade322496, vol_name:cephfs) < "" Dec 2 05:14:58 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ec95b4f7-9427-4d16-81f2-f3cade322496'' moved to trashcan Dec 2 05:14:58 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:14:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ec95b4f7-9427-4d16-81f2-f3cade322496, vol_name:cephfs) < "" Dec 2 05:14:59 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e248 e248: 6 total, 6 up, 6 in Dec 2 05:14:59 localhost nova_compute[281045]: 2025-12-02 10:14:59.098 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v606: 177 pgs: 177 active+clean; 299 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.7 MiB/s wr, 75 op/s Dec 2 05:14:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:14:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:15:00 localhost systemd[1]: tmp-crun.gWAyce.mount: Deactivated successfully. 
Dec 2 05:15:00 localhost podman[324171]: 2025-12-02 10:15:00.076065186 +0000 UTC m=+0.078154973 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:15:00 localhost podman[324171]: 2025-12-02 10:15:00.089026274 +0000 UTC m=+0.091116051 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:15:00 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:15:00 localhost podman[324172]: 2025-12-02 10:15:00.142519808 +0000 UTC m=+0.142280363 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc.) 
Dec 2 05:15:00 localhost podman[324172]: 2025-12-02 10:15:00.185990763 +0000 UTC m=+0.185751278 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, 
vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 05:15:00 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 05:15:00 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "95ddc110-cc3c-4c61-8c87-bf390fb060a5", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:15:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, vol_name:cephfs) < "" Dec 2 05:15:00 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/95ddc110-cc3c-4c61-8c87-bf390fb060a5/.meta.tmp' Dec 2 05:15:00 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/95ddc110-cc3c-4c61-8c87-bf390fb060a5/.meta.tmp' to config b'/volumes/_nogroup/95ddc110-cc3c-4c61-8c87-bf390fb060a5/.meta' Dec 2 05:15:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, 
prefix:fs subvolume create, size:2147483648, sub_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, vol_name:cephfs) < "" Dec 2 05:15:00 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "95ddc110-cc3c-4c61-8c87-bf390fb060a5", "format": "json"}]: dispatch Dec 2 05:15:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, vol_name:cephfs) < "" Dec 2 05:15:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, vol_name:cephfs) < "" Dec 2 05:15:01 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e249 e249: 6 total, 6 up, 6 in Dec 2 05:15:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v608: 177 pgs: 177 active+clean; 299 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 3.5 MiB/s wr, 72 op/s Dec 2 05:15:01 localhost nova_compute[281045]: 2025-12-02 10:15:01.842 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:01 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff6ef38f-1e0f-40c8-83c7-811b055e05a4", "format": "json"}]: dispatch Dec 2 05:15:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, format:json, prefix:fs clone status, 
vol_name:cephfs) < "" Dec 2 05:15:01 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:01.865+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff6ef38f-1e0f-40c8-83c7-811b055e05a4' of type subvolume Dec 2 05:15:01 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff6ef38f-1e0f-40c8-83c7-811b055e05a4' of type subvolume Dec 2 05:15:01 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff6ef38f-1e0f-40c8-83c7-811b055e05a4", "force": true, "format": "json"}]: dispatch Dec 2 05:15:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, vol_name:cephfs) < "" Dec 2 05:15:01 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ff6ef38f-1e0f-40c8-83c7-811b055e05a4'' moved to trashcan Dec 2 05:15:01 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:15:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, vol_name:cephfs) < "" Dec 2 05:15:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:03 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e250 e250: 6 total, 6 up, 6 in Dec 2 05:15:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:03.184 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock 
"_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:03.184 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:03.184 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:03 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v610: 177 pgs: 177 active+clean; 299 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 7.2 MiB/s rd, 7.3 MiB/s wr, 200 op/s Dec 2 05:15:03 localhost podman[239757]: time="2025-12-02T10:15:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:15:03 localhost podman[239757]: @ - - [02/Dec/2025:10:15:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:15:03 localhost podman[239757]: @ - - [02/Dec/2025:10:15:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19253 "" "Go-http-client/1.1" Dec 2 05:15:03 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "95ddc110-cc3c-4c61-8c87-bf390fb060a5", "format": "json"}]: dispatch Dec 2 05:15:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting 
_cmd_fs_clone_status(clone_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:03 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:03.810+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '95ddc110-cc3c-4c61-8c87-bf390fb060a5' of type subvolume Dec 2 05:15:03 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '95ddc110-cc3c-4c61-8c87-bf390fb060a5' of type subvolume Dec 2 05:15:03 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "95ddc110-cc3c-4c61-8c87-bf390fb060a5", "force": true, "format": "json"}]: dispatch Dec 2 05:15:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, vol_name:cephfs) < "" Dec 2 05:15:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/95ddc110-cc3c-4c61-8c87-bf390fb060a5'' moved to trashcan Dec 2 05:15:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:15:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, vol_name:cephfs) < "" Dec 2 05:15:04 localhost nova_compute[281045]: 2025-12-02 10:15:04.125 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:04 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:15:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:04 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/.meta.tmp' Dec 2 05:15:04 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/.meta.tmp' to config b'/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/.meta' Dec 2 05:15:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:04 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "format": "json"}]: dispatch Dec 2 05:15:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:05 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "9fcb2cae-930f-42ab-bc64-d18acc6b4eec", "mode": "0755", "format": "json"}]: dispatch Dec 2 05:15:05 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:9fcb2cae-930f-42ab-bc64-d18acc6b4eec, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Dec 2 05:15:05 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:9fcb2cae-930f-42ab-bc64-d18acc6b4eec, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Dec 2 05:15:05 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e251 e251: 6 total, 6 up, 6 in Dec 2 05:15:05 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 299 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 124 op/s Dec 2 05:15:06 localhost nova_compute[281045]: 2025-12-02 10:15:06.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e252 e252: 6 total, 6 up, 6 in Dec 2 05:15:06 localhost nova_compute[281045]: 2025-12-02 10:15:06.872 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:15:06 Dec 2 05:15:06 localhost ceph-mgr[287188]: 
[balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 05:15:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap Dec 2 05:15:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['manila_metadata', '.mgr', 'images', 'manila_data', 'volumes', 'vms', 'backups'] Dec 2 05:15:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes Dec 2 05:15:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:15:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:15:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:15:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:15:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:15:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:15:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:15:07 localhost podman[324213]: 2025-12-02 10:15:07.097885804 +0000 UTC m=+0.095470264 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:15:07 localhost podman[324213]: 2025-12-02 10:15:07.139003277 +0000 UTC m=+0.136587747 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 05:15:07 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 05:15:07 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "874ecabd-a028-4aa4-9a5c-9d15f18fe0c8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:15:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, vol_name:cephfs) < "" Dec 2 05:15:07 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/874ecabd-a028-4aa4-9a5c-9d15f18fe0c8/.meta.tmp' Dec 2 05:15:07 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/874ecabd-a028-4aa4-9a5c-9d15f18fe0c8/.meta.tmp' to config b'/volumes/_nogroup/874ecabd-a028-4aa4-9a5c-9d15f18fe0c8/.meta' Dec 2 05:15:07 localhost 
ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, vol_name:cephfs) < "" Dec 2 05:15:07 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "874ecabd-a028-4aa4-9a5c-9d15f18fe0c8", "format": "json"}]: dispatch Dec 2 05:15:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, vol_name:cephfs) < "" Dec 2 05:15:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, vol_name:cephfs) < "" Dec 2 05:15:07 localhost systemd-journald[47679]: Data hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 75.0 (53723 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation. Dec 2 05:15:07 localhost systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 05:15:07 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 05:15:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v614: 177 pgs: 177 active+clean; 299 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 124 op/s Dec 2 05:15:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:15:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:15:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 2 05:15:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:15:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Dec 2 05:15:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:15:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029694915549972082 of space, bias 1.0, pg target 0.5929084804811092 quantized to 32 (current 32) Dec 2 05:15:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:15:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8570103846780196 quantized to 32 (current 32) Dec 2 05:15:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:15:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.001483655255443886 of space, bias 1.0, pg target 0.2947528440815187 quantized to 32 (current 32) Dec 2 05:15:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 
0.0 0 45071990784 Dec 2 05:15:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32) Dec 2 05:15:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:15:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0010588969151312116 of space, bias 4.0, pg target 0.8414700818909361 quantized to 16 (current 16) Dec 2 05:15:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:15:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:15:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:15:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:15:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:15:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:15:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:15:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:15:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:15:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:15:07 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 05:15:07 localhost nova_compute[281045]: 2025-12-02 10:15:07.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:07 localhost nova_compute[281045]: 2025-12-02 10:15:07.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:07 localhost nova_compute[281045]: 2025-12-02 10:15:07.547 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:07 localhost nova_compute[281045]: 2025-12-02 10:15:07.548 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:07 localhost nova_compute[281045]: 2025-12-02 10:15:07.548 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:07 localhost nova_compute[281045]: 2025-12-02 10:15:07.548 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:15:07 localhost nova_compute[281045]: 2025-12-02 10:15:07.548 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:08 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:15:08 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2291623585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:15:08 localhost nova_compute[281045]: 2025-12-02 10:15:08.027 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:08 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "9fcb2cae-930f-42ab-bc64-d18acc6b4eec", "force": true, "format": "json"}]: dispatch Dec 2 05:15:08 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:9fcb2cae-930f-42ab-bc64-d18acc6b4eec, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Dec 2 05:15:08 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:9fcb2cae-930f-42ab-bc64-d18acc6b4eec, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Dec 2 05:15:08 localhost nova_compute[281045]: 2025-12-02 10:15:08.239 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:15:08 localhost nova_compute[281045]: 2025-12-02 10:15:08.241 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11464MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:15:08 localhost nova_compute[281045]: 2025-12-02 10:15:08.241 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:08 localhost nova_compute[281045]: 2025-12-02 10:15:08.242 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:08 localhost nova_compute[281045]: 2025-12-02 10:15:08.300 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:15:08 localhost nova_compute[281045]: 2025-12-02 10:15:08.301 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:15:08 localhost nova_compute[281045]: 2025-12-02 10:15:08.332 281049 DEBUG 
oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:08 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:15:08 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < "" Dec 2 05:15:08 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/.meta.tmp' Dec 2 05:15:08 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/.meta.tmp' to config b'/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/.meta' Dec 2 05:15:08 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < "" Dec 2 05:15:08 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "format": "json"}]: dispatch Dec 2 05:15:08 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume 
getpath, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < "" Dec 2 05:15:08 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < "" Dec 2 05:15:08 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:15:08 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2041560746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:15:08 localhost nova_compute[281045]: 2025-12-02 10:15:08.810 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:08 localhost nova_compute[281045]: 2025-12-02 10:15:08.818 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:15:08 localhost nova_compute[281045]: 2025-12-02 10:15:08.842 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:15:08 localhost nova_compute[281045]: 2025-12-02 10:15:08.844 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:15:08 localhost nova_compute[281045]: 2025-12-02 10:15:08.844 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:08 localhost neutron_sriov_agent[255428]: 2025-12-02 10:15:08.863 2 INFO neutron.agent.securitygroups_rpc [None req-e4074800-d361-45b9-b812-e8981daf28f3 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Security group rule updated ['10785715-ddea-43bb-82fa-9f44a2fb1faa']#033[00m Dec 2 05:15:09 localhost nova_compute[281045]: 2025-12-02 10:15:09.175 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:09 localhost neutron_sriov_agent[255428]: 2025-12-02 10:15:09.250 2 INFO neutron.agent.securitygroups_rpc [None req-ee552935-4da7-44ca-8e38-6eb6181199e8 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Security group rule updated ['10785715-ddea-43bb-82fa-9f44a2fb1faa']#033[00m Dec 2 05:15:09 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v615: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 62 KiB/s wr, 80 op/s Dec 2 05:15:09 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:15:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < "" Dec 2 05:15:09 localhost nova_compute[281045]: 2025-12-02 10:15:09.846 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:09 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/.meta.tmp' Dec 2 05:15:09 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/.meta.tmp' to config b'/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/.meta' Dec 2 05:15:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < "" Dec 2 05:15:09 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "format": "json"}]: dispatch Dec 2 05:15:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < "" Dec 2 05:15:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < "" Dec 2 05:15:10 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "874ecabd-a028-4aa4-9a5c-9d15f18fe0c8", "format": "json"}]: dispatch Dec 2 05:15:10 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:10 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:10 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '874ecabd-a028-4aa4-9a5c-9d15f18fe0c8' of type subvolume Dec 2 05:15:10 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:10.347+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '874ecabd-a028-4aa4-9a5c-9d15f18fe0c8' of type subvolume Dec 2 05:15:10 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "874ecabd-a028-4aa4-9a5c-9d15f18fe0c8", "force": true, "format": "json"}]: dispatch Dec 2 05:15:10 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, vol_name:cephfs) < "" Dec 2 05:15:10 
localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/874ecabd-a028-4aa4-9a5c-9d15f18fe0c8'' moved to trashcan Dec 2 05:15:10 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:15:10 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, vol_name:cephfs) < "" Dec 2 05:15:10 localhost nova_compute[281045]: 2025-12-02 10:15:10.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:10 localhost nova_compute[281045]: 2025-12-02 10:15:10.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 47 KiB/s wr, 61 op/s Dec 2 05:15:11 localhost nova_compute[281045]: 2025-12-02 10:15:11.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:11 localhost nova_compute[281045]: 2025-12-02 10:15:11.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 
05:15:11 localhost nova_compute[281045]: 2025-12-02 10:15:11.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:15:11 localhost nova_compute[281045]: 2025-12-02 10:15:11.556 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 05:15:11 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "20d14646-9b62-4b24-984f-6434ad453069", "mode": "0755", "format": "json"}]: dispatch Dec 2 05:15:11 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:20d14646-9b62-4b24-984f-6434ad453069, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Dec 2 05:15:11 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:20d14646-9b62-4b24-984f-6434ad453069, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Dec 2 05:15:11 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:15:11 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, 
sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < "" Dec 2 05:15:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e253 e253: 6 total, 6 up, 6 in Dec 2 05:15:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0) Dec 2 05:15:11 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:11 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-1696860369 with tenant 82d5a09e66904b8ca3c7a7850f1e5c52 Dec 2 05:15:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3", "osd", "allow rw pool=manila_data namespace=fsvolumens_d194b0f5-d0ac-4694-aaca-c67668af8e04", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:15:11 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3", "osd", "allow rw pool=manila_data namespace=fsvolumens_d194b0f5-d0ac-4694-aaca-c67668af8e04", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:11 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", 
"format": "json"} : dispatch Dec 2 05:15:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3", "osd", "allow rw pool=manila_data namespace=fsvolumens_d194b0f5-d0ac-4694-aaca-c67668af8e04", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:11 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3", "osd", "allow rw pool=manila_data namespace=fsvolumens_d194b0f5-d0ac-4694-aaca-c67668af8e04", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:11 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3", "osd", "allow rw pool=manila_data namespace=fsvolumens_d194b0f5-d0ac-4694-aaca-c67668af8e04", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:11 localhost nova_compute[281045]: 2025-12-02 10:15:11.914 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:11 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < "" Dec 2 05:15:12 localhost openstack_network_exporter[241816]: ERROR 
10:15:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:15:12 localhost openstack_network_exporter[241816]: ERROR 10:15:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:15:12 localhost openstack_network_exporter[241816]: ERROR 10:15:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:15:12 localhost openstack_network_exporter[241816]: ERROR 10:15:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:15:12 localhost openstack_network_exporter[241816]: Dec 2 05:15:12 localhost openstack_network_exporter[241816]: ERROR 10:15:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:15:12 localhost openstack_network_exporter[241816]: Dec 2 05:15:12 localhost nova_compute[281045]: 2025-12-02 10:15:12.327 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:12 localhost nova_compute[281045]: 2025-12-02 10:15:12.328 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:12 localhost nova_compute[281045]: 2025-12-02 10:15:12.343 281049 DEBUG nova.compute.manager [None 
req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Dec 2 05:15:12 localhost nova_compute[281045]: 2025-12-02 10:15:12.406 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:12 localhost nova_compute[281045]: 2025-12-02 10:15:12.406 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:12 localhost nova_compute[281045]: 2025-12-02 10:15:12.411 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Dec 2 05:15:12 localhost nova_compute[281045]: 2025-12-02 10:15:12.411 281049 INFO nova.compute.claims [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Claim successful on node np0005541914.localdomain#033[00m Dec 2 05:15:12 localhost nova_compute[281045]: 2025-12-02 10:15:12.513 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:15:12 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/703101738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:15:12 localhost nova_compute[281045]: 2025-12-02 10:15:12.963 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:12 localhost nova_compute[281045]: 2025-12-02 10:15:12.969 281049 DEBUG nova.compute.provider_tree [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:15:12 localhost nova_compute[281045]: 2025-12-02 10:15:12.985 281049 DEBUG nova.scheduler.client.report [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.006 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default 
default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.006 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.058 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.059 281049 DEBUG nova.network.neutron [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.071 281049 INFO nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Ignoring supplied device name: /dev/vda. 
Libvirt can't honour user-supplied dev names#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.093 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.186 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.190 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.191 281049 INFO nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Creating image(s)#033[00m Dec 2 05:15:13 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:15:13 localhost 
ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < "" Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.239 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:15:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v618: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 138 KiB/s wr, 67 op/s Dec 2 05:15:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/.meta.tmp' Dec 2 05:15:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/.meta.tmp' to config b'/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/.meta' Dec 2 05:15:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < "" Dec 2 05:15:13 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "format": "json"}]: dispatch Dec 2 05:15:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < "" Dec 2 05:15:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < "" Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.288 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.333 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.339 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.362 281049 DEBUG nova.policy [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e5c738ba752455b908099b234a743a2', 
'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd858413a9b01463f96545916d2abe5ab', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.416 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.417 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.418 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.419 281049 DEBUG 
oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.457 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.463 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:13 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f9911edf-08c0-404a-9b15-1750f599217e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:15:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f9911edf-08c0-404a-9b15-1750f599217e, vol_name:cephfs) < "" Dec 2 05:15:13 localhost nova_compute[281045]: 2025-12-02 10:15:13.528 281049 
DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f9911edf-08c0-404a-9b15-1750f599217e/.meta.tmp' Dec 2 05:15:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f9911edf-08c0-404a-9b15-1750f599217e/.meta.tmp' to config b'/volumes/_nogroup/f9911edf-08c0-404a-9b15-1750f599217e/.meta' Dec 2 05:15:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f9911edf-08c0-404a-9b15-1750f599217e, vol_name:cephfs) < "" Dec 2 05:15:13 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f9911edf-08c0-404a-9b15-1750f599217e", "format": "json"}]: dispatch Dec 2 05:15:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f9911edf-08c0-404a-9b15-1750f599217e, vol_name:cephfs) < "" Dec 2 05:15:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f9911edf-08c0-404a-9b15-1750f599217e, vol_name:cephfs) < "" Dec 2 05:15:13 localhost neutron_sriov_agent[255428]: 2025-12-02 10:15:13.701 2 INFO neutron.agent.securitygroups_rpc [req-a541e13d-87f6-4580-832f-af5d7aef99a4 req-c15ffc3e-ba6d-409e-8103-3b4ea0d7e66e 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Security group member updated 
['10785715-ddea-43bb-82fa-9f44a2fb1faa']#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.035 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.086 281049 DEBUG nova.network.neutron [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Successfully created port: 5312b3e8-70f6-4e16-95ba-31b46130d41f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.154 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] resizing rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.210 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.320 281049 DEBUG nova.objects.instance [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lazy-loading 'migration_context' on Instance uuid e4135ac9-548a-4e8d-99d6-cde8dedb2c77 obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.407 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.407 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Ensure instance console log exists: /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.408 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.408 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.409 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 
0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:15:14 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "20d14646-9b62-4b24-984f-6434ad453069", "force": true, "format": "json"}]: dispatch Dec 2 05:15:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:20d14646-9b62-4b24-984f-6434ad453069, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Dec 2 05:15:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:20d14646-9b62-4b24-984f-6434ad453069, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.867 281049 DEBUG nova.network.neutron [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Successfully updated port: 
5312b3e8-70f6-4e16-95ba-31b46130d41f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.886 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.887 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquired lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.887 281049 DEBUG nova.network.neutron [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 2 05:15:14 localhost nova_compute[281045]: 2025-12-02 10:15:14.980 281049 DEBUG nova.network.neutron [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Dec 2 05:15:15 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch Dec 2 05:15:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < "" Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.144 281049 DEBUG nova.compute.manager [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-changed-5312b3e8-70f6-4e16-95ba-31b46130d41f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.145 281049 DEBUG nova.compute.manager [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Refreshing instance network info cache due to event network-changed-5312b3e8-70f6-4e16-95ba-31b46130d41f. 
external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.146 281049 DEBUG oslo_concurrency.lockutils [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.216 281049 DEBUG nova.network.neutron [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Updating instance_info_cache with network_info: [{"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.238 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Releasing lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.239 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Instance network_info: |[{"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 
2025-12-02 10:15:15.239 281049 DEBUG oslo_concurrency.lockutils [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquired lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.240 281049 DEBUG nova.network.neutron [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Refreshing network info cache for port 5312b3e8-70f6-4e16-95ba-31b46130d41f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.245 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Start _get_guest_xml network_info=[{"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", 
"bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-12-02T10:01:55Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'image_id': 'd85e840d-fa56-497b-b5bd-b49584d3e97a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.252 281049 WARNING nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:15:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v619: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 131 KiB/s wr, 63 op/s Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.261 281049 DEBUG nova.virt.libvirt.host [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.261 281049 DEBUG nova.virt.libvirt.host [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.264 281049 DEBUG nova.virt.libvirt.host [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.264 281049 DEBUG nova.virt.libvirt.host [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.265 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.265 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T10:01:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='82beb986-6d20-42dc-b738-1cef87dee30f',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-12-02T10:01:55Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Dec 2 05:15:15 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0) Dec 2 05:15:15 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.266 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.266 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.267 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.267 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Dec 2 05:15:15 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} v 0) Dec 2 05:15:15 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:15 localhost nova_compute[281045]: 
2025-12-02 10:15:15.267 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.268 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.268 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.269 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.269 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Dec 2 05:15:15 localhost 
nova_compute[281045]: 2025-12-02 10:15:15.269 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.274 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < "" Dec 2 05:15:15 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch Dec 2 05:15:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < "" Dec 2 05:15:15 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1696860369, client_metadata.root=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3 Dec 2 05:15:15 localhost ceph-mgr[287188]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:15:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < "" Dec 2 05:15:15 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "format": "json"}]: dispatch Dec 2 05:15:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:15 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd194b0f5-d0ac-4694-aaca-c67668af8e04' of type subvolume Dec 2 05:15:15 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:15.593+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd194b0f5-d0ac-4694-aaca-c67668af8e04' of type subvolume Dec 2 05:15:15 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "force": true, "format": "json"}]: dispatch Dec 2 05:15:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, 
vol_name:cephfs) < "" Dec 2 05:15:15 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04'' moved to trashcan Dec 2 05:15:15 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:15:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < "" Dec 2 05:15:15 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:15:15 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2486108379' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.743 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.784 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.789 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): ceph 
mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.815 281049 DEBUG nova.network.neutron [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Updated VIF entry in instance network info cache for port 5312b3e8-70f6-4e16-95ba-31b46130d41f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.816 281049 DEBUG nova.network.neutron [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Updating instance_info_cache with network_info: [{"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, 
"active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:15:15 localhost nova_compute[281045]: 2025-12-02 10:15:15.839 281049 DEBUG oslo_concurrency.lockutils [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Releasing lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:15:15 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:15 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:15 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:15 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished Dec 2 05:15:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:15:16 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3419331710' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.224 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.226 281049 DEBUG nova.virt.libvirt.vif [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T10:15:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-296444076',display_name='tempest-VolumesBackupsTest-instance-296444076',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-volumesbackupstest-instance-296444076',id=11,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHkG+iQFuqjdTdoAEp/kY7cw/kNkZh2LbPeLiGtN8Y97oQKWkY5uonMIVaaGJFGigPwU4U46n3JFHVn8N98Xn7K+8moZz1t1gU5zOrLM/YgrB2LfY32eA3cmwq2A59hxHw==',key_name='tempest-keypair-352232817',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d858413a9b01463f96545916d2abe5ab',ramdisk_id='',reservation_id='r-nwps7030',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-479123361',owner_user_name='tempest-VolumesBackupsTest-479123361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T10:15:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0e5c738ba752455b908099b234a743a2',uuid=e4135ac9-548a-4e8d-99d6-cde8dedb2c77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.227 281049 DEBUG nova.network.os_vif_util [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Converting VIF {"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": 
"normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.228 281049 DEBUG nova.network.os_vif_util [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:0c:21,bridge_name='br-int',has_traffic_filtering=True,id=5312b3e8-70f6-4e16-95ba-31b46130d41f,network=Network(8703a229-8c49-443e-95c6-aff62a358434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5312b3e8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.229 281049 DEBUG nova.objects.instance [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lazy-loading 'pci_devices' on Instance uuid e4135ac9-548a-4e8d-99d6-cde8dedb2c77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.248 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] End _get_guest_xml xml= Dec 2 05:15:16 localhost nova_compute[281045]: e4135ac9-548a-4e8d-99d6-cde8dedb2c77 Dec 2 05:15:16 localhost nova_compute[281045]: instance-0000000b Dec 2 05:15:16 localhost nova_compute[281045]: 131072 Dec 2 05:15:16 localhost nova_compute[281045]: 1 Dec 2 05:15:16 localhost nova_compute[281045]: Dec 2 05:15:16 localhost nova_compute[281045]: Dec 2 05:15:16 localhost nova_compute[281045]: Dec 2 05:15:16 localhost nova_compute[281045]: 
[guest domain XML elided: the multi-line XML dump emitted by _get_guest_xml was mangled in capture (angle-bracketed markup stripped), leaving only fragmentary element text across repeated journal-line prefixes — recoverable values: name instance-0000000b, title tempest-VolumesBackupsTest-instance-296444076, uuid e4135ac9-548a-4e8d-99d6-cde8dedb2c77, memory 131072 KiB (128 MiB), 1 vCPU, creationTime 2025-12-02 10:15:15, SMBIOS sysinfo manufacturer RDO / product OpenStack Compute / version 27.5.2-0.20250829104910.6f8decf.el9, owner tempest-VolumesBackupsTest-479123361 (user tempest-VolumesBackupsTest-479123361-project-member), os type hvm, RNG backend /dev/urandom] Dec 2 05:15:16
localhost nova_compute[281045]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.249 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Preparing to wait for external event network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.249 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.250 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.250 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.251 281049 DEBUG nova.virt.libvirt.vif [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T10:15:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-296444076',display_name='tempest-VolumesBackupsTest-instance-296444076',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-volumesbackupstest-instance-296444076',id=11,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHkG+iQFuqjdTdoAEp/kY7cw/kNkZh2LbPeLiGtN8Y97oQKWkY5uonMIVaaGJFGigPwU4U46n3JFHVn8N98Xn7K+8moZz1t1gU5zOrLM/YgrB2LfY32eA3cmwq2A59hxHw==',key_name='tempest-keypair-352232817',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d858413a9b01463f96545916d2abe5ab',ramdisk_id='',reservation_id='r-nwps7030',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-479123361',owner_user_name='tempest-VolumesBackupsTest-479123361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T10:15:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0e5c738ba752455b908099b234a743a2',uuid=e4135ac9-548a-4e8d-99d6-cde8dedb2c77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.252 281049 DEBUG nova.network.os_vif_util [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Converting VIF {"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.252 281049 DEBUG nova.network.os_vif_util [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:0c:21,bridge_name='br-int',has_traffic_filtering=True,id=5312b3e8-70f6-4e16-95ba-31b46130d41f,network=Network(8703a229-8c49-443e-95c6-aff62a358434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5312b3e8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.253 281049 DEBUG os_vif [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:0c:21,bridge_name='br-int',has_traffic_filtering=True,id=5312b3e8-70f6-4e16-95ba-31b46130d41f,network=Network(8703a229-8c49-443e-95c6-aff62a358434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5312b3e8-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.254 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.254 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:15:16 localhost 
nova_compute[281045]: 2025-12-02 10:15:16.255 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.258 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.258 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5312b3e8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.259 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5312b3e8-70, col_values=(('external_ids', {'iface-id': '5312b3e8-70f6-4e16-95ba-31b46130d41f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:0c:21', 'vm-uuid': 'e4135ac9-548a-4e8d-99d6-cde8dedb2c77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.261 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.264 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.267 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.268 281049 INFO os_vif 
[None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:0c:21,bridge_name='br-int',has_traffic_filtering=True,id=5312b3e8-70f6-4e16-95ba-31b46130d41f,network=Network(8703a229-8c49-443e-95c6-aff62a358434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5312b3e8-70')#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.319 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.320 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] No BDM found with device name sda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.320 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] No VIF found with MAC fa:16:3e:77:0c:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.321 281049 INFO nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Using config drive#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.362 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:15:16 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "auth_id": "Joe", "tenant_id": "0fe90f11d3f64e12b3591732792a929e", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:15:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, tenant_id:0fe90f11d3f64e12b3591732792a929e, vol_name:cephfs) < "" Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.478 281049 INFO nova.virt.libvirt.driver [None 
req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Creating config drive at /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77/disk.config#033[00m Dec 2 05:15:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Dec 2 05:15:16 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Dec 2 05:15:16 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID Joe with tenant 0fe90f11d3f64e12b3591732792a929e Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.489 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwbjo22t0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/1c901b4c-031b-4c12-b1cb-8ac5e6296378", "osd", "allow rw pool=manila_data namespace=fsvolumens_07b7e455-1272-48fc-92f9-fd54c3fafcb0", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:15:16 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/1c901b4c-031b-4c12-b1cb-8ac5e6296378", "osd", "allow rw pool=manila_data namespace=fsvolumens_07b7e455-1272-48fc-92f9-fd54c3fafcb0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, tenant_id:0fe90f11d3f64e12b3591732792a929e, vol_name:cephfs) < "" Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.617 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwbjo22t0" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.662 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.667 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): rbd import --pool vms 
/var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77/disk.config e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.890 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77/disk.config e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:16 localhost nova_compute[281045]: 2025-12-02 10:15:16.891 281049 INFO nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Deleting local config drive /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77/disk.config because it was imported into RBD.#033[00m Dec 2 05:15:16 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f9911edf-08c0-404a-9b15-1750f599217e", "format": "json"}]: dispatch Dec 2 05:15:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f9911edf-08c0-404a-9b15-1750f599217e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f9911edf-08c0-404a-9b15-1750f599217e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:16 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:16.907+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f9911edf-08c0-404a-9b15-1750f599217e' of type subvolume Dec 2 05:15:16 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f9911edf-08c0-404a-9b15-1750f599217e' of type subvolume Dec 2 05:15:16 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f9911edf-08c0-404a-9b15-1750f599217e", "force": true, "format": "json"}]: dispatch Dec 2 05:15:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f9911edf-08c0-404a-9b15-1750f599217e, vol_name:cephfs) < "" Dec 2 05:15:16 localhost systemd[1]: Started libvirt secret daemon. 
Dec 2 05:15:16 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f9911edf-08c0-404a-9b15-1750f599217e'' moved to trashcan Dec 2 05:15:16 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:15:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f9911edf-08c0-404a-9b15-1750f599217e, vol_name:cephfs) < "" Dec 2 05:15:16 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Dec 2 05:15:16 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/1c901b4c-031b-4c12-b1cb-8ac5e6296378", "osd", "allow rw pool=manila_data namespace=fsvolumens_07b7e455-1272-48fc-92f9-fd54c3fafcb0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:16 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/1c901b4c-031b-4c12-b1cb-8ac5e6296378", "osd", "allow rw pool=manila_data namespace=fsvolumens_07b7e455-1272-48fc-92f9-fd54c3fafcb0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:16 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/1c901b4c-031b-4c12-b1cb-8ac5e6296378", "osd", "allow rw pool=manila_data namespace=fsvolumens_07b7e455-1272-48fc-92f9-fd54c3fafcb0", "mon", "allow r"], 
"format": "json"}]': finished Dec 2 05:15:17 localhost kernel: device tap5312b3e8-70 entered promiscuous mode Dec 2 05:15:17 localhost NetworkManager[5967]: [1764670517.0304] manager: (tap5312b3e8-70): new Tun device (/org/freedesktop/NetworkManager/Devices/47) Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.032 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:17 localhost ovn_controller[153778]: 2025-12-02T10:15:17Z|00232|binding|INFO|Claiming lport 5312b3e8-70f6-4e16-95ba-31b46130d41f for this chassis. Dec 2 05:15:17 localhost ovn_controller[153778]: 2025-12-02T10:15:17Z|00233|binding|INFO|5312b3e8-70f6-4e16-95ba-31b46130d41f: Claiming fa:16:3e:77:0c:21 10.100.0.8 Dec 2 05:15:17 localhost systemd-udevd[324615]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.041 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:0c:21 10.100.0.8'], port_security=['fa:16:3e:77:0c:21 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e4135ac9-548a-4e8d-99d6-cde8dedb2c77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8703a229-8c49-443e-95c6-aff62a358434', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd858413a9b01463f96545916d2abe5ab', 'neutron:revision_number': '2', 'neutron:security_group_ids': '10785715-ddea-43bb-82fa-9f44a2fb1faa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 
'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22d83034-71a8-46e9-a33a-f696e74c13f0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=5312b3e8-70f6-4e16-95ba-31b46130d41f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.044 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 5312b3e8-70f6-4e16-95ba-31b46130d41f in datapath 8703a229-8c49-443e-95c6-aff62a358434 bound to our chassis#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.046 159483 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8703a229-8c49-443e-95c6-aff62a358434#033[00m Dec 2 05:15:17 localhost ovn_controller[153778]: 2025-12-02T10:15:17Z|00234|binding|INFO|Setting lport 5312b3e8-70f6-4e16-95ba-31b46130d41f ovn-installed in OVS Dec 2 05:15:17 localhost ovn_controller[153778]: 2025-12-02T10:15:17Z|00235|binding|INFO|Setting lport 5312b3e8-70f6-4e16-95ba-31b46130d41f up in Southbound Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.049 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.056 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:17 localhost NetworkManager[5967]: [1764670517.0583] device (tap5312b3e8-70): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 2 05:15:17 localhost NetworkManager[5967]: [1764670517.0590] device (tap5312b3e8-70): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 
2025-12-02 10:15:17.059 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[088b9412-6ff1-42f6-be6a-b7bde98df4cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.060 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8703a229-81 in ovnmeta-8703a229-8c49-443e-95c6-aff62a358434 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.063 262550 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8703a229-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.063 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e397fb-e5f7-4bfb-b7c3-25aa1362c42c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.067 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d0380a70-4ad2-44b0-ada1-87bb25759b32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost systemd-machined[202765]: New machine qemu-6-instance-0000000b. Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.080 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[061e2840-54cd-4206-b299-648d29109962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost systemd[1]: Started Virtual Machine qemu-6-instance-0000000b. 
Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.095 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[ab854f98-b0e7-409f-a545-32098d1ad2bb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.118 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[d9cb3f93-70b3-4237-a229-29720576a107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.123 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[763903ac-7611-4efb-9840-5e43b3698b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost NetworkManager[5967]: [1764670517.1257] manager: (tap8703a229-80): new Veth device (/org/freedesktop/NetworkManager/Devices/48) Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.150 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[1a91300b-2b57-4cf0-8685-28ee2492e765]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.154 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[43b26a3c-478e-4c85-a4b4-792a2c1522d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap8703a229-81: link becomes ready Dec 2 05:15:17 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap8703a229-80: link becomes ready Dec 2 05:15:17 localhost NetworkManager[5967]: [1764670517.1757] device (tap8703a229-80): carrier: link connected Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.181 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[78da7015-c948-45ad-9fad-ee284fcfe31d]: (4, None) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.199 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[3040bae0-ef4f-4362-895c-e09049ea50d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8703a229-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:75:e2:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 
'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1268134, 'reachable_time': 27752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 
'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324652, 'error': None, 'target': 'ovnmeta-8703a229-8c49-443e-95c6-aff62a358434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.216 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[096c8182-289a-4fcf-a0f0-bb4f4d413d5c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe75:e26c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1268134, 'tstamp': 1268134}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324660, 'error': None, 'target': 'ovnmeta-8703a229-8c49-443e-95c6-aff62a358434', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.232 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9eb788-4d9f-4ade-a3af-6e4f10cb8aab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8703a229-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], 
['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:75:e2:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 
'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1268134, 'reachable_time': 27752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 
'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324669, 'error': None, 'target': 'ovnmeta-8703a229-8c49-443e-95c6-aff62a358434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.239 281049 DEBUG nova.compute.manager [req-4bc1cba7-731c-4aa9-88ad-da105c6ab1c3 req-23af33a0-1320-4bd1-9c41-e4afe8ba83c1 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.240 281049 DEBUG oslo_concurrency.lockutils [req-4bc1cba7-731c-4aa9-88ad-da105c6ab1c3 req-23af33a0-1320-4bd1-9c41-e4afe8ba83c1 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.241 281049 DEBUG oslo_concurrency.lockutils [req-4bc1cba7-731c-4aa9-88ad-da105c6ab1c3 req-23af33a0-1320-4bd1-9c41-e4afe8ba83c1 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.241 281049 
DEBUG oslo_concurrency.lockutils [req-4bc1cba7-731c-4aa9-88ad-da105c6ab1c3 req-23af33a0-1320-4bd1-9c41-e4afe8ba83c1 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.242 281049 DEBUG nova.compute.manager [req-4bc1cba7-731c-4aa9-88ad-da105c6ab1c3 req-23af33a0-1320-4bd1-9c41-e4afe8ba83c1 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Processing event network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Dec 2 05:15:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 111 KiB/s wr, 53 op/s Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.263 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[47b2dc7d-4b5c-4616-a59f-822403e6651a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.320 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf0244f-e19f-40fb-9834-1ff358bc56ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.322 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8703a229-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:15:17 
localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.323 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.323 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8703a229-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.326 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:17 localhost kernel: device tap8703a229-80 entered promiscuous mode Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.336 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8703a229-80, col_values=(('external_ids', {'iface-id': '37cd0238-9054-48a1-8d6c-4a73284b3493'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.337 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:17 localhost ovn_controller[153778]: 2025-12-02T10:15:17Z|00236|binding|INFO|Releasing lport 37cd0238-9054-48a1-8d6c-4a73284b3493 from this chassis (sb_readonly=0) Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.339 159483 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8703a229-8c49-443e-95c6-aff62a358434.pid.haproxy; Error: [Errno 2] No such file or directory: 
'/var/lib/neutron/external/pids/8703a229-8c49-443e-95c6-aff62a358434.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.340 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[9171a0e3-a81d-487e-85ac-8bebbac536c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.341 159483 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: global Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: log /dev/log local0 debug Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: log-tag haproxy-metadata-proxy-8703a229-8c49-443e-95c6-aff62a358434 Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: user root Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: group root Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: maxconn 1024 Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: pidfile /var/lib/neutron/external/pids/8703a229-8c49-443e-95c6-aff62a358434.pid.haproxy Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: daemon Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: defaults Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: log global Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: mode http Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: option httplog Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: option dontlognull Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: option http-server-close Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: option forwardfor Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: retries 3 Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: timeout http-request 30s Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: timeout connect 30s Dec 2 05:15:17 localhost 
ovn_metadata_agent[159477]: timeout client 32s Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: timeout server 32s Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: timeout http-keep-alive 30s Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: listen listener Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: bind 169.254.169.254:80 Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: server metadata /var/lib/neutron/metadata_proxy Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: http-request add-header X-OVN-Network-ID 8703a229-8c49-443e-95c6-aff62a358434 Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 2 05:15:17 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:17.342 159483 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8703a229-8c49-443e-95c6-aff62a358434', 'env', 'PROCESS_TAG=haproxy-8703a229-8c49-443e-95c6-aff62a358434', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8703a229-8c49-443e-95c6-aff62a358434.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.347 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.389 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.390 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] 
[instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] VM Started (Lifecycle Event)#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.393 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.412 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.415 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.419 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.423 281049 INFO nova.virt.libvirt.driver [-] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Instance spawned successfully.#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.423 
281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.436 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.437 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.437 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] VM Paused (Lifecycle Event)#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.449 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.450 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] 
[instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.451 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.452 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.452 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.453 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:15:17 
localhost nova_compute[281045]: 2025-12-02 10:15:17.460 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.465 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.466 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] VM Resumed (Lifecycle Event)#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.483 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.487 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.503 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.508 281049 INFO nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Took 4.32 seconds to spawn the instance on the hypervisor.#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.509 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:15:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.570 281049 INFO nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Took 5.18 seconds to build instance.#033[00m Dec 2 05:15:17 localhost nova_compute[281045]: 2025-12-02 10:15:17.586 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 5.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:17 localhost podman[324729]: Dec 2 05:15:17 localhost podman[324729]: 2025-12-02 10:15:17.811407575 +0000 UTC m=+0.094761263 container create 9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:15:17 localhost podman[324729]: 2025-12-02 10:15:17.766064892 +0000 UTC m=+0.049418590 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 05:15:17 localhost systemd[1]: Started libpod-conmon-9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8.scope. Dec 2 05:15:17 localhost systemd[1]: Started libcrun container. Dec 2 05:15:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9251bae03dba350098b1f5dbad067aff0e21633b444c29635f1cf251c0cbf4bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:15:17 localhost podman[324729]: 2025-12-02 10:15:17.921808667 +0000 UTC m=+0.205162325 container init 9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:15:17 localhost podman[324729]: 2025-12-02 10:15:17.935775026 +0000 UTC m=+0.219128684 container start 9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:15:17 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f6f1c661-5fb8-4466-9254-d282f758f450", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 2 05:15:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f6f1c661-5fb8-4466-9254-d282f758f450, vol_name:cephfs) < ""
Dec 2 05:15:17 localhost neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434[324743]: [NOTICE] (324747) : New worker (324749) forked
Dec 2 05:15:17 localhost neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434[324743]: [NOTICE] (324747) : Loading success.
Dec 2 05:15:18 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f6f1c661-5fb8-4466-9254-d282f758f450/.meta.tmp'
Dec 2 05:15:18 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f6f1c661-5fb8-4466-9254-d282f758f450/.meta.tmp' to config b'/volumes/_nogroup/f6f1c661-5fb8-4466-9254-d282f758f450/.meta'
Dec 2 05:15:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f6f1c661-5fb8-4466-9254-d282f758f450, vol_name:cephfs) < ""
Dec 2 05:15:18 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f6f1c661-5fb8-4466-9254-d282f758f450", "format": "json"}]: dispatch
Dec 2 05:15:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f6f1c661-5fb8-4466-9254-d282f758f450, vol_name:cephfs) < ""
Dec 2 05:15:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f6f1c661-5fb8-4466-9254-d282f758f450, vol_name:cephfs) < ""
Dec 2 05:15:18 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 2 05:15:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 2 05:15:18 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/.meta.tmp'
Dec 2 05:15:18 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/.meta.tmp' to config b'/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/.meta'
Dec 2 05:15:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 2 05:15:18 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "format": "json"}]: dispatch
Dec 2 05:15:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 2 05:15:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 2 05:15:19 localhost nova_compute[281045]: 2025-12-02 10:15:19.178 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:15:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v621: 177 pgs: 177 active+clean; 255 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 2.3 MiB/s wr, 57 op/s
Dec 2 05:15:19 localhost nova_compute[281045]: 2025-12-02 10:15:19.275 281049 DEBUG nova.compute.manager [req-6b9ca700-6828-4318-aeaf-b1dc0c2a069a req-8f19298d-f4f0-4be8-ade1-0ae5114e2947 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 2 05:15:19 localhost nova_compute[281045]: 2025-12-02 10:15:19.277 281049 DEBUG oslo_concurrency.lockutils [req-6b9ca700-6828-4318-aeaf-b1dc0c2a069a req-8f19298d-f4f0-4be8-ade1-0ae5114e2947 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:15:19 localhost nova_compute[281045]: 2025-12-02 10:15:19.278 281049 DEBUG oslo_concurrency.lockutils [req-6b9ca700-6828-4318-aeaf-b1dc0c2a069a req-8f19298d-f4f0-4be8-ade1-0ae5114e2947 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:15:19 localhost nova_compute[281045]: 2025-12-02 10:15:19.278 281049 DEBUG oslo_concurrency.lockutils [req-6b9ca700-6828-4318-aeaf-b1dc0c2a069a req-8f19298d-f4f0-4be8-ade1-0ae5114e2947 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:15:19 localhost nova_compute[281045]: 2025-12-02 10:15:19.279 281049 DEBUG nova.compute.manager [req-6b9ca700-6828-4318-aeaf-b1dc0c2a069a req-8f19298d-f4f0-4be8-ade1-0ae5114e2947 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] No waiting events found dispatching network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 2 05:15:19 localhost nova_compute[281045]: 2025-12-02 10:15:19.280 281049 WARNING nova.compute.manager [req-6b9ca700-6828-4318-aeaf-b1dc0c2a069a req-8f19298d-f4f0-4be8-ade1-0ae5114e2947 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received unexpected event network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f for instance with vm_state active and task_state None.#033[00m
Dec 2 05:15:20 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 2 05:15:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 2 05:15:20 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/.meta.tmp'
Dec 2 05:15:20 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/.meta.tmp' to config b'/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/.meta'
Dec 2 05:15:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 2 05:15:20 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "format": "json"}]: dispatch
Dec 2 05:15:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 2 05:15:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 2 05:15:20 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 2 05:15:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, vol_name:cephfs) < ""
Dec 2 05:15:20 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489/.meta.tmp'
Dec 2 05:15:20 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489/.meta.tmp' to config b'/volumes/_nogroup/ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489/.meta'
Dec 2 05:15:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, vol_name:cephfs) < ""
Dec 2 05:15:20 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489", "format": "json"}]: dispatch
Dec 2 05:15:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, vol_name:cephfs) < ""
Dec 2 05:15:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, vol_name:cephfs) < ""
Dec 2 05:15:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v622: 177 pgs: 177 active+clean; 255 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 2.3 MiB/s wr, 57 op/s
Dec 2 05:15:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f6f1c661-5fb8-4466-9254-d282f758f450", "format": "json"}]: dispatch
Dec 2 05:15:21 localhost nova_compute[281045]: 2025-12-02 10:15:21.305 281049 DEBUG nova.compute.manager [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-changed-5312b3e8-70f6-4e16-95ba-31b46130d41f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 2 05:15:21 localhost nova_compute[281045]: 2025-12-02 10:15:21.305 281049 DEBUG nova.compute.manager [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Refreshing instance network info cache due to event network-changed-5312b3e8-70f6-4e16-95ba-31b46130d41f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 2 05:15:21 localhost nova_compute[281045]: 2025-12-02 10:15:21.306 281049 DEBUG oslo_concurrency.lockutils [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 2 05:15:21 localhost nova_compute[281045]: 2025-12-02 10:15:21.306 281049 DEBUG oslo_concurrency.lockutils [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquired lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 2 05:15:21 localhost nova_compute[281045]: 2025-12-02 10:15:21.307 281049 DEBUG nova.network.neutron [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Refreshing network info cache for port 5312b3e8-70f6-4e16-95ba-31b46130d41f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 2 05:15:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f6f1c661-5fb8-4466-9254-d282f758f450, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 2 05:15:21 localhost nova_compute[281045]: 2025-12-02 10:15:21.311 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:15:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f6f1c661-5fb8-4466-9254-d282f758f450, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 2 05:15:21 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:21.317+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f6f1c661-5fb8-4466-9254-d282f758f450' of type subvolume
Dec 2 05:15:21 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f6f1c661-5fb8-4466-9254-d282f758f450' of type subvolume
Dec 2 05:15:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f6f1c661-5fb8-4466-9254-d282f758f450", "force": true, "format": "json"}]: dispatch
Dec 2 05:15:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f6f1c661-5fb8-4466-9254-d282f758f450, vol_name:cephfs) < ""
Dec 2 05:15:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f6f1c661-5fb8-4466-9254-d282f758f450'' moved to trashcan
Dec 2 05:15:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 2 05:15:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f6f1c661-5fb8-4466-9254-d282f758f450, vol_name:cephfs) < ""
Dec 2 05:15:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch
Dec 2 05:15:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
Dec 2 05:15:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 2 05:15:21 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 2 05:15:21 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-1696860369 with tenant 82d5a09e66904b8ca3c7a7850f1e5c52
Dec 2 05:15:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/952959d8-8df4-478f-98b8-ef136b3959a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_4b34f061-715a-44a3-9eab-41d055e085ea", "mon", "allow r"], "format": "json"} v 0)
Dec 2 05:15:21 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/952959d8-8df4-478f-98b8-ef136b3959a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_4b34f061-715a-44a3-9eab-41d055e085ea", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:15:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
Dec 2 05:15:21 localhost nova_compute[281045]: 2025-12-02 10:15:21.753 281049 DEBUG nova.network.neutron [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Updated VIF entry in instance network info cache for port 5312b3e8-70f6-4e16-95ba-31b46130d41f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 2 05:15:21 localhost nova_compute[281045]: 2025-12-02 10:15:21.754 281049 DEBUG nova.network.neutron [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Updating instance_info_cache with network_info: [{"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 2 05:15:21 localhost nova_compute[281045]: 2025-12-02 10:15:21.771 281049 DEBUG oslo_concurrency.lockutils [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Releasing lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 2 05:15:22 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 2 05:15:22 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/952959d8-8df4-478f-98b8-ef136b3959a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_4b34f061-715a-44a3-9eab-41d055e085ea", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:15:22 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/952959d8-8df4-478f-98b8-ef136b3959a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_4b34f061-715a-44a3-9eab-41d055e085ea", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:15:22 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/952959d8-8df4-478f-98b8-ef136b3959a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_4b34f061-715a-44a3-9eab-41d055e085ea", "mon", "allow r"], "format": "json"}]': finished
Dec 2 05:15:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:15:23 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "Joe", "tenant_id": "3212fac1e026474b9022ee93e4d925a9", "access_level": "rw", "format": "json"}]: dispatch
Dec 2 05:15:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, tenant_id:3212fac1e026474b9022ee93e4d925a9, vol_name:cephfs) < ""
Dec 2 05:15:23 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Dec 2 05:15:23 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 2 05:15:23 localhost ceph-mgr[287188]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Dec 2 05:15:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, tenant_id:3212fac1e026474b9022ee93e4d925a9, vol_name:cephfs) < ""
Dec 2 05:15:23 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:23.162+0000 7fd37dd6f640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Dec 2 05:15:23 localhost ceph-mgr[287188]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Dec 2 05:15:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v623: 177 pgs: 177 active+clean; 256 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 2 05:15:23 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489", "format": "json"}]: dispatch
Dec 2 05:15:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 2 05:15:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 2 05:15:23 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:23.645+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489' of type subvolume
Dec 2 05:15:23 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489' of type subvolume
Dec 2 05:15:23 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489", "force": true, "format": "json"}]: dispatch
Dec 2 05:15:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, vol_name:cephfs) < ""
Dec 2 05:15:23 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489'' moved to trashcan
Dec 2 05:15:23 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 2 05:15:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, vol_name:cephfs) < ""
Dec 2 05:15:24 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 2 05:15:24 localhost nova_compute[281045]: 2025-12-02 10:15:24.181 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:15:24 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "816532a3-40a4-4c5f-a808-14898d84932f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 2 05:15:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:816532a3-40a4-4c5f-a808-14898d84932f, vol_name:cephfs) < ""
Dec 2 05:15:24 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/816532a3-40a4-4c5f-a808-14898d84932f/.meta.tmp'
Dec 2 05:15:24 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/816532a3-40a4-4c5f-a808-14898d84932f/.meta.tmp' to config b'/volumes/_nogroup/816532a3-40a4-4c5f-a808-14898d84932f/.meta'
Dec 2 05:15:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:816532a3-40a4-4c5f-a808-14898d84932f, vol_name:cephfs) < ""
Dec 2 05:15:24 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "816532a3-40a4-4c5f-a808-14898d84932f", "format": "json"}]: dispatch
Dec 2 05:15:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:816532a3-40a4-4c5f-a808-14898d84932f, vol_name:cephfs) < ""
Dec 2 05:15:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:816532a3-40a4-4c5f-a808-14898d84932f, vol_name:cephfs) < ""
Dec 2 05:15:24 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 2 05:15:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 2 05:15:24 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 2 05:15:24 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 2 05:15:24 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} v 0)
Dec 2 05:15:24 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 2 05:15:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 2 05:15:24 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 2 05:15:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 2 05:15:24 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1696860369, client_metadata.root=/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/952959d8-8df4-478f-98b8-ef136b3959a9
Dec 2 05:15:24 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 2 05:15:24 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 2 05:15:25 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "format": "json"}]: dispatch
Dec 2 05:15:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4b34f061-715a-44a3-9eab-41d055e085ea, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 2 05:15:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4b34f061-715a-44a3-9eab-41d055e085ea, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 2 05:15:25 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:25.064+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4b34f061-715a-44a3-9eab-41d055e085ea' of type subvolume
Dec 2 05:15:25 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4b34f061-715a-44a3-9eab-41d055e085ea' of type subvolume
Dec 2 05:15:25 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "force": true, "format": "json"}]: dispatch
Dec 2 05:15:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 2 05:15:25 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea'' moved to trashcan
Dec 2 05:15:25 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 2 05:15:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 2 05:15:25 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 2 05:15:25 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 2 05:15:25 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 2 05:15:25 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished
Dec 2 05:15:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v624: 177 pgs: 177 active+clean; 256 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 110 op/s
Dec 2 05:15:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 05:15:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 05:15:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 05:15:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 05:15:26 localhost systemd[1]: tmp-crun.W1rAeg.mount: Deactivated successfully.
Dec 2 05:15:26 localhost podman[324767]: 2025-12-02 10:15:26.097084884 +0000 UTC m=+0.083443855 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 2 05:15:26 localhost podman[324760]: 2025-12-02 10:15:26.067892326 +0000 UTC m=+0.066323548 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 05:15:26 localhost podman[324760]: 2025-12-02 10:15:26.151883627 +0000 UTC m=+0.150314829 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:15:26 localhost podman[324767]: 2025-12-02 10:15:26.168954922 +0000 UTC m=+0.155313923 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Dec 2 05:15:26 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:15:26 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:15:26 localhost podman[324759]: 2025-12-02 10:15:26.249300821 +0000 UTC m=+0.243023939 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Dec 2 05:15:26 localhost podman[324759]: 2025-12-02 10:15:26.279295913 +0000 UTC 
m=+0.273019021 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:15:26 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 05:15:26 localhost podman[324761]: 2025-12-02 10:15:26.299343738 +0000 UTC m=+0.292450896 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute) Dec 2 05:15:26 localhost podman[324761]: 2025-12-02 10:15:26.306838139 +0000 UTC m=+0.299945297 container exec_died 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125) Dec 2 05:15:26 localhost nova_compute[281045]: 2025-12-02 10:15:26.313 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:26 localhost systemd[1]: 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:15:26 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "tempest-cephx-id-2071519372", "tenant_id": "3212fac1e026474b9022ee93e4d925a9", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:15:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2071519372, format:json, prefix:fs subvolume authorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, tenant_id:3212fac1e026474b9022ee93e4d925a9, vol_name:cephfs) < "" Dec 2 05:15:26 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2071519372", "format": "json"} v 0) Dec 2 05:15:26 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2071519372", "format": "json"} : dispatch Dec 2 05:15:26 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-2071519372 with tenant 3212fac1e026474b9022ee93e4d925a9 Dec 2 05:15:26 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2071519372", "caps": ["mds", "allow rw path=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b", "osd", "allow rw pool=manila_data namespace=fsvolumens_a1ba20ee-ed37-461f-8a6b-289e0637343e", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:15:26 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2071519372", "caps": ["mds", "allow rw path=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b", "osd", "allow rw pool=manila_data namespace=fsvolumens_a1ba20ee-ed37-461f-8a6b-289e0637343e", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2071519372, format:json, prefix:fs subvolume authorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, tenant_id:3212fac1e026474b9022ee93e4d925a9, vol_name:cephfs) < "" Dec 2 05:15:26 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:15:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < "" Dec 2 05:15:26 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta.tmp' Dec 2 05:15:26 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta.tmp' to config b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta' Dec 2 05:15:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < "" Dec 2 05:15:26 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "format": "json"}]: dispatch Dec 2 05:15:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < "" Dec 2 05:15:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < "" Dec 2 05:15:27 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "snap_name": "4297d647-9a4c-4f1f-9f4b-d5919a5d649a_de240f12-a8e2-4a29-90c2-24d0d5497a6c", "force": true, "format": "json"}]: dispatch Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4297d647-9a4c-4f1f-9f4b-d5919a5d649a_de240f12-a8e2-4a29-90c2-24d0d5497a6c, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < "" Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta.tmp' Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta.tmp' to config b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta' Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4297d647-9a4c-4f1f-9f4b-d5919a5d649a_de240f12-a8e2-4a29-90c2-24d0d5497a6c, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < "" Dec 2 05:15:27 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "snap_name": "4297d647-9a4c-4f1f-9f4b-d5919a5d649a", "force": true, "format": "json"}]: dispatch Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4297d647-9a4c-4f1f-9f4b-d5919a5d649a, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < "" Dec 2 05:15:27 localhost systemd[1]: tmp-crun.pRo0PA.mount: Deactivated successfully. Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta.tmp' Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta.tmp' to config b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta' Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4297d647-9a4c-4f1f-9f4b-d5919a5d649a, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < "" Dec 2 05:15:27 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2071519372", "format": "json"} : dispatch Dec 2 05:15:27 localhost ceph-mon[301710]: from='mgr.34354 ' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2071519372", "caps": ["mds", "allow rw path=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b", "osd", "allow rw pool=manila_data namespace=fsvolumens_a1ba20ee-ed37-461f-8a6b-289e0637343e", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:27 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2071519372", "caps": ["mds", "allow rw path=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b", "osd", "allow rw pool=manila_data namespace=fsvolumens_a1ba20ee-ed37-461f-8a6b-289e0637343e", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:27 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2071519372", "caps": ["mds", "allow rw path=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b", "osd", "allow rw pool=manila_data namespace=fsvolumens_a1ba20ee-ed37-461f-8a6b-289e0637343e", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v625: 177 pgs: 177 active+clean; 256 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 110 op/s Dec 2 05:15:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:27 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": 
"json"}]: dispatch Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < "" Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/.meta.tmp' Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/.meta.tmp' to config b'/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/.meta' Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < "" Dec 2 05:15:27 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "format": "json"}]: dispatch Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < "" Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < "" Dec 2 05:15:27 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "816532a3-40a4-4c5f-a808-14898d84932f", "format": "json"}]: dispatch Dec 2 05:15:27 
localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:816532a3-40a4-4c5f-a808-14898d84932f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:816532a3-40a4-4c5f-a808-14898d84932f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:27 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:27.960+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '816532a3-40a4-4c5f-a808-14898d84932f' of type subvolume Dec 2 05:15:27 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '816532a3-40a4-4c5f-a808-14898d84932f' of type subvolume Dec 2 05:15:27 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "816532a3-40a4-4c5f-a808-14898d84932f", "force": true, "format": "json"}]: dispatch Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:816532a3-40a4-4c5f-a808-14898d84932f, vol_name:cephfs) < "" Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/816532a3-40a4-4c5f-a808-14898d84932f'' moved to trashcan Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:15:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:816532a3-40a4-4c5f-a808-14898d84932f, vol_name:cephfs) < "" Dec 2 05:15:29 localhost nova_compute[281045]: 
2025-12-02 10:15:29.186 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v626: 177 pgs: 177 active+clean; 257 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 117 op/s Dec 2 05:15:29 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "Joe", "format": "json"}]: dispatch Dec 2 05:15:29 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < "" Dec 2 05:15:29 localhost ceph-mgr[287188]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume 'a1ba20ee-ed37-461f-8a6b-289e0637343e' Dec 2 05:15:29 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < "" Dec 2 05:15:29 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "Joe", "format": "json"}]: dispatch Dec 2 05:15:29 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < "" Dec 2 05:15:29 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, 
client_metadata.root=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b Dec 2 05:15:29 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:15:29 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < "" Dec 2 05:15:29 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "snap_name": "fd340733-2ac3-47a8-9e18-7daf7e9911c9", "format": "json"}]: dispatch Dec 2 05:15:29 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fd340733-2ac3-47a8-9e18-7daf7e9911c9, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < "" Dec 2 05:15:29 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fd340733-2ac3-47a8-9e18-7daf7e9911c9, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < "" Dec 2 05:15:30 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "format": "json"}]: dispatch Dec 2 05:15:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:334c2711-d0f6-419e-922d-408205cc4ec2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:334c2711-d0f6-419e-922d-408205cc4ec2, format:json, 
prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:30 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:30.210+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '334c2711-d0f6-419e-922d-408205cc4ec2' of type subvolume Dec 2 05:15:30 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '334c2711-d0f6-419e-922d-408205cc4ec2' of type subvolume Dec 2 05:15:30 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "force": true, "format": "json"}]: dispatch Dec 2 05:15:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < "" Dec 2 05:15:30 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2'' moved to trashcan Dec 2 05:15:30 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:15:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < "" Dec 2 05:15:30 localhost ovn_controller[153778]: 2025-12-02T10:15:30Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:0c:21 10.100.0.8 Dec 2 05:15:30 localhost ovn_controller[153778]: 2025-12-02T10:15:30Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:0c:21 10.100.0.8 Dec 2 05:15:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:15:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:15:31 localhost podman[324841]: 2025-12-02 10:15:31.104767513 +0000 UTC m=+0.097119904 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:15:31 localhost podman[324841]: 2025-12-02 10:15:31.115240595 +0000 UTC m=+0.107592976 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:15:31 localhost podman[324842]: 2025-12-02 10:15:31.154210643 +0000 UTC m=+0.143869712 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, 
description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 
05:15:31 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:15:31 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:15:31 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < "" Dec 2 05:15:31 localhost podman[324842]: 2025-12-02 10:15:31.196038098 +0000 UTC m=+0.185697167 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible) Dec 2 05:15:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0) Dec 2 05:15:31 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:31 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-1696860369 with tenant 82d5a09e66904b8ca3c7a7850f1e5c52 Dec 2 05:15:31 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:15:31 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49", "osd", "allow rw pool=manila_data namespace=fsvolumens_76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:15:31 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49", "osd", "allow rw pool=manila_data namespace=fsvolumens_76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v627: 177 pgs: 177 active+clean; 257 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 150 KiB/s wr, 73 op/s Dec 2 05:15:31 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < "" Dec 2 05:15:31 localhost nova_compute[281045]: 2025-12-02 10:15:31.317 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:32 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49", "osd", "allow rw pool=manila_data namespace=fsvolumens_76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:32 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49", "osd", "allow rw pool=manila_data namespace=fsvolumens_76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:32 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49", "osd", "allow rw pool=manila_data namespace=fsvolumens_76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e254 e254: 6 total, 6 up, 6 in Dec 2 05:15:32 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "tempest-cephx-id-2071519372", "format": "json"}]: dispatch Dec 2 05:15:32 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2071519372, format:json, prefix:fs subvolume deauthorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < "" Dec 2 05:15:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2071519372", "format": "json"} v 0) Dec 2 05:15:32 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2071519372", "format": "json"} : dispatch Dec 2 05:15:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2071519372"} v 0) Dec 2 05:15:32 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2071519372"} : dispatch Dec 2 05:15:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2071519372, format:json, prefix:fs subvolume deauthorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < "" Dec 2 05:15:33 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "tempest-cephx-id-2071519372", "format": "json"}]: dispatch Dec 2 05:15:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2071519372, format:json, prefix:fs subvolume evict, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < "" Dec 2 05:15:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with 
auth_name=tempest-cephx-id-2071519372, client_metadata.root=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b Dec 2 05:15:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:15:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2071519372, format:json, prefix:fs subvolume evict, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < "" Dec 2 05:15:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v629: 177 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 173 active+clean; 290 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 KiB/s rd, 2.7 MiB/s wr, 92 op/s Dec 2 05:15:33 localhost podman[239757]: time="2025-12-02T10:15:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:15:33 localhost podman[239757]: @ - - [02/Dec/2025:10:15:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157932 "" "Go-http-client/1.1" Dec 2 05:15:33 localhost podman[239757]: @ - - [02/Dec/2025:10:15:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19736 "" "Go-http-client/1.1" Dec 2 05:15:33 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2071519372", "format": "json"} : dispatch Dec 2 05:15:33 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2071519372"} : dispatch Dec 2 05:15:33 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2071519372"} : dispatch Dec 2 05:15:33 localhost ceph-mon[301710]: 
from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2071519372"}]': finished Dec 2 05:15:33 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "snap_name": "fd340733-2ac3-47a8-9e18-7daf7e9911c9_af7ac55c-f3f5-4ae4-aac4-245996ebb306", "force": true, "format": "json"}]: dispatch Dec 2 05:15:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fd340733-2ac3-47a8-9e18-7daf7e9911c9_af7ac55c-f3f5-4ae4-aac4-245996ebb306, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < "" Dec 2 05:15:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta.tmp' Dec 2 05:15:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta.tmp' to config b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta' Dec 2 05:15:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fd340733-2ac3-47a8-9e18-7daf7e9911c9_af7ac55c-f3f5-4ae4-aac4-245996ebb306, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < "" Dec 2 05:15:33 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "snap_name": "fd340733-2ac3-47a8-9e18-7daf7e9911c9", "force": true, "format": "json"}]: dispatch Dec 2 05:15:33 localhost ceph-mgr[287188]: 
[volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fd340733-2ac3-47a8-9e18-7daf7e9911c9, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < "" Dec 2 05:15:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta.tmp' Dec 2 05:15:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta.tmp' to config b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta' Dec 2 05:15:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fd340733-2ac3-47a8-9e18-7daf7e9911c9, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < "" Dec 2 05:15:34 localhost nova_compute[281045]: 2025-12-02 10:15:34.189 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:34 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch Dec 2 05:15:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < "" Dec 2 05:15:34 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0) Dec 2 05:15:34 localhost 
ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:34 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} v 0) Dec 2 05:15:34 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < "" Dec 2 05:15:34 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch Dec 2 05:15:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < "" Dec 2 05:15:34 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1696860369, client_metadata.root=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49 Dec 2 05:15:34 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:15:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, 
format:json, prefix:fs subvolume evict, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < "" Dec 2 05:15:34 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "format": "json"}]: dispatch Dec 2 05:15:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:34 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:34.692+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2' of type subvolume Dec 2 05:15:34 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2' of type subvolume Dec 2 05:15:34 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "force": true, "format": "json"}]: dispatch Dec 2 05:15:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < "" Dec 2 05:15:34 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2'' moved to trashcan 
Dec 2 05:15:34 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:15:34 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < "" Dec 2 05:15:35 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:35 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:35 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:35 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished Dec 2 05:15:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v630: 177 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 173 active+clean; 290 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 KiB/s rd, 2.7 MiB/s wr, 92 op/s Dec 2 05:15:35 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:15:35 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:15:35 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:15:35 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 
172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:15:35 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:15:35 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev 008bc7f7-26c0-4b79-b645-f20b0a2c87ee (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:15:35 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev 008bc7f7-26c0-4b79-b645-f20b0a2c87ee (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:15:35 localhost ceph-mgr[287188]: [progress INFO root] Completed event 008bc7f7-26c0-4b79-b645-f20b0a2c87ee (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:15:35 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:15:35 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:15:36 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:15:36 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:15:36 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "auth_id": "Joe", "format": "json"}]: dispatch Dec 2 05:15:36 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < "" Dec 
2 05:15:36 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Dec 2 05:15:36 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Dec 2 05:15:36 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0) Dec 2 05:15:36 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Dec 2 05:15:36 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < "" Dec 2 05:15:36 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "auth_id": "Joe", "format": "json"}]: dispatch Dec 2 05:15:36 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < "" Dec 2 05:15:36 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/1c901b4c-031b-4c12-b1cb-8ac5e6296378 Dec 2 05:15:36 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:15:36 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, 
format:json, prefix:fs subvolume evict, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < "" Dec 2 05:15:36 localhost nova_compute[281045]: 2025-12-02 10:15:36.319 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:36 localhost nova_compute[281045]: 2025-12-02 10:15:36.538 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" by "nova.compute.manager.ComputeManager.reserve_block_device_name..do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:36 localhost nova_compute[281045]: 2025-12-02 10:15:36.539 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name..do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:36 localhost nova_compute[281045]: 2025-12-02 10:15:36.558 281049 DEBUG nova.objects.instance [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lazy-loading 'flavor' on Instance uuid e4135ac9-548a-4e8d-99d6-cde8dedb2c77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:15:36 localhost nova_compute[281045]: 2025-12-02 10:15:36.610 281049 INFO nova.virt.libvirt.driver [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Ignoring supplied device name: /dev/vdb#033[00m Dec 2 
05:15:36 localhost nova_compute[281045]: 2025-12-02 10:15:36.628 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name..do_reserve" :: held 0.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:36 localhost nova_compute[281045]: 2025-12-02 10:15:36.751 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" by "nova.compute.manager.ComputeManager.attach_volume..do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:36 localhost nova_compute[281045]: 2025-12-02 10:15:36.752 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" acquired by "nova.compute.manager.ComputeManager.attach_volume..do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:36 localhost nova_compute[281045]: 2025-12-02 10:15:36.752 281049 INFO nova.compute.manager [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Attaching volume eb88c64a-7c29-421c-91ad-190ba7bbf450 to /dev/vdb#033[00m Dec 2 05:15:36 localhost nova_compute[281045]: 2025-12-02 10:15:36.846 281049 DEBUG os_brick.utils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] ==> 
get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.108', 'multipath': True, 'enforce_multipath': True, 'host': 'np0005541914.localdomain', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m Dec 2 05:15:36 localhost nova_compute[281045]: 2025-12-02 10:15:36.848 281049 INFO oslo.privsep.daemon [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpi9k9djog/privsep.sock']#033[00m Dec 2 05:15:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:15:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:15:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:15:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:15:37 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "format": "json"}]: dispatch Dec 2 05:15:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:37 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:37.034+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb9dc736-c0fd-42af-8ddc-944e8a1e50c5' of type subvolume Dec 2 05:15:37 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb9dc736-c0fd-42af-8ddc-944e8a1e50c5' of type subvolume Dec 2 05:15:37 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "force": true, "format": "json"}]: dispatch Dec 2 05:15:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < "" Dec 2 05:15:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:15:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:15:37 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Dec 2 05:15:37 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Dec 2 05:15:37 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Dec 2 05:15:37 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished Dec 2 05:15:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5'' moved to trashcan Dec 2 05:15:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:15:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < "" Dec 2 05:15:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 173 active+clean; 290 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 KiB/s rd, 2.7 MiB/s wr, 92 op/s Dec 2 05:15:37 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:15:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:15:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 
kv_alloc: 318767104 Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.621 281049 INFO oslo.privsep.daemon [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Spawned new privsep daemon via rootwrap#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.492 324975 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.499 324975 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.503 324975 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.503 324975 INFO oslo.privsep.daemon [-] privsep daemon running as pid 324975#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.626 324975 DEBUG oslo.privsep.daemon [-] privsep: reply[64523c19-068d-4c99-aadf-d61d198833e3]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.710 324975 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.717 324975 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.717 324975 DEBUG oslo.privsep.daemon [-] privsep: reply[093827e1-c48a-4967-8f1e-e9bba1d767a7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.720 324975 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:37 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:15:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < "" Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.730 324975 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.731 324975 DEBUG oslo.privsep.daemon [-] privsep: reply[f27ab309-393d-4a90-9904-1b87bd4cf04e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:cd5f4359d661\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0) Dec 2 05:15:37 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": 
"client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.735 324975 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:37 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-1696860369 with tenant 82d5a09e66904b8ca3c7a7850f1e5c52 Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.747 324975 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.748 324975 DEBUG oslo.privsep.daemon [-] privsep: reply[be7a37ae-4776-4eb9-a2a5-508553273701]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.752 324975 DEBUG oslo.privsep.daemon [-] privsep: reply[53098531-2397-41de-ad94-b5fc854dbafe]: (4, '64aa5208-7bf7-490c-857b-3c1a3cae8bb3') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.753 281049 DEBUG oslo_concurrency.processutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:15:37 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.782 281049 DEBUG oslo_concurrency.processutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.786 281049 DEBUG os_brick.initiator.connectors.lightos [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.788 281049 DEBUG os_brick.initiator.connectors.lightos [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] LIGHTOS: did not find dsc, continuing anyway. 
get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.788 281049 DEBUG os_brick.initiator.connectors.lightos [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:64aa5208-7bf7-490c-857b-3c1a3cae8bb3 dsc: get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.789 281049 DEBUG os_brick.utils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] <== get_connector_properties: return (942ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.108', 'host': 'np0005541914.localdomain', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:cd5f4359d661', 'do_local_attach': False, 'nvme_hostid': '64aa5208-7bf7-490c-857b-3c1a3cae8bb3', 'system uuid': '64aa5208-7bf7-490c-857b-3c1a3cae8bb3', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:64aa5208-7bf7-490c-857b-3c1a3cae8bb3', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m Dec 2 05:15:37 localhost nova_compute[281045]: 2025-12-02 10:15:37.790 281049 DEBUG nova.virt.block_device [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Updating existing volume attachment record: df45d2d8-6af6-424c-887e-0dc643e47ee7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m Dec 2 05:15:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e255 e255: 6 total, 6 up, 6 in Dec 2 05:15:37 localhost ceph-mgr[287188]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < "" Dec 2 05:15:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:15:38 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:15:38 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:38 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:38 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:38 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:38 localhost systemd[1]: tmp-crun.KSKIXB.mount: Deactivated successfully. Dec 2 05:15:38 localhost podman[324984]: 2025-12-02 10:15:38.102347235 +0000 UTC m=+0.101426987 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.vendor=CentOS) Dec 2 05:15:38 localhost podman[324984]: 2025-12-02 10:15:38.140942751 +0000 UTC m=+0.140022483 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible) Dec 2 05:15:38 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:15:38 localhost nova_compute[281045]: 2025-12-02 10:15:38.479 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:38 localhost nova_compute[281045]: 2025-12-02 10:15:38.480 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:38 localhost nova_compute[281045]: 2025-12-02 10:15:38.481 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:38 localhost nova_compute[281045]: 2025-12-02 10:15:38.501 281049 DEBUG nova.objects.instance [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lazy-loading 'flavor' on Instance uuid e4135ac9-548a-4e8d-99d6-cde8dedb2c77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:15:38 localhost nova_compute[281045]: 2025-12-02 10:15:38.524 281049 DEBUG nova.virt.libvirt.driver [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab 
- - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Attempting to attach volume eb88c64a-7c29-421c-91ad-190ba7bbf450 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m Dec 2 05:15:38 localhost nova_compute[281045]: 2025-12-02 10:15:38.528 281049 DEBUG nova.virt.libvirt.guest [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] attach device xml: Dec 2 05:15:38 localhost nova_compute[281045]: Dec 2 05:15:38 localhost nova_compute[281045]: Dec 2 05:15:38 localhost nova_compute[281045]: Dec 2 05:15:38 localhost nova_compute[281045]: Dec 2 05:15:38 localhost nova_compute[281045]: Dec 2 05:15:38 localhost nova_compute[281045]: Dec 2 05:15:38 localhost nova_compute[281045]: Dec 2 05:15:38 localhost nova_compute[281045]: Dec 2 05:15:38 localhost nova_compute[281045]: Dec 2 05:15:38 localhost nova_compute[281045]: Dec 2 05:15:38 localhost nova_compute[281045]: eb88c64a-7c29-421c-91ad-190ba7bbf450 Dec 2 05:15:38 localhost nova_compute[281045]: Dec 2 05:15:38 localhost nova_compute[281045]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m Dec 2 05:15:38 localhost nova_compute[281045]: 2025-12-02 10:15:38.685 281049 DEBUG nova.virt.libvirt.driver [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] No BDM found with device name vda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 2 05:15:38 localhost nova_compute[281045]: 2025-12-02 10:15:38.686 281049 DEBUG nova.virt.libvirt.driver [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 2 05:15:38 localhost nova_compute[281045]: 2025-12-02 10:15:38.687 281049 DEBUG nova.virt.libvirt.driver [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 2 05:15:38 localhost nova_compute[281045]: 2025-12-02 10:15:38.688 281049 DEBUG nova.virt.libvirt.driver [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] No VIF found with MAC fa:16:3e:77:0c:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Dec 2 05:15:38 localhost nova_compute[281045]: 2025-12-02 10:15:38.789 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" "released" by "nova.compute.manager.ComputeManager.attach_volume..do_attach_volume" :: held 2.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:39 localhost nova_compute[281045]: 2025-12-02 10:15:39.193 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:39 
localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 291 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 533 KiB/s rd, 3.4 MiB/s wr, 116 op/s Dec 2 05:15:39 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "auth_id": "admin", "tenant_id": "0fe90f11d3f64e12b3591732792a929e", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:15:39 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, tenant_id:0fe90f11d3f64e12b3591732792a929e, vol_name:cephfs) < "" Dec 2 05:15:39 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0) Dec 2 05:15:39 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Dec 2 05:15:39 localhost ceph-mgr[287188]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify Dec 2 05:15:39 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, tenant_id:0fe90f11d3f64e12b3591732792a929e, vol_name:cephfs) < "" Dec 2 05:15:39 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:39.483+0000 7fd37dd6f640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. 
Not allowed to modify Dec 2 05:15:39 localhost ceph-mgr[287188]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify Dec 2 05:15:39 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:15:39 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3360648141' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:15:40 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e256 e256: 6 total, 6 up, 6 in Dec 2 05:15:40 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Dec 2 05:15:40 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:40.337 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:15:40 localhost nova_compute[281045]: 2025-12-02 10:15:40.338 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:40 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:40.339 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:15:40 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch Dec 2 05:15:40 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0) Dec 2 05:15:41 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} v 0) Dec 2 05:15:41 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:41 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:41 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch 
Dec 2 05:15:41 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:41 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:41 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished Dec 2 05:15:41 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch Dec 2 05:15:41 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:41 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1696860369, client_metadata.root=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4 Dec 2 05:15:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e257 e257: 6 total, 6 up, 6 in Dec 2 05:15:41 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:15:41 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 291 MiB data, 
1.3 GiB used, 41 GiB / 42 GiB avail; 2.2 KiB/s rd, 205 KiB/s wr, 15 op/s Dec 2 05:15:41 localhost nova_compute[281045]: 2025-12-02 10:15:41.322 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e258 e258: 6 total, 6 up, 6 in Dec 2 05:15:42 localhost openstack_network_exporter[241816]: ERROR 10:15:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:15:42 localhost openstack_network_exporter[241816]: ERROR 10:15:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:15:42 localhost openstack_network_exporter[241816]: ERROR 10:15:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:15:42 localhost openstack_network_exporter[241816]: ERROR 10:15:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:15:42 localhost openstack_network_exporter[241816]: Dec 2 05:15:42 localhost openstack_network_exporter[241816]: ERROR 10:15:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:15:42 localhost openstack_network_exporter[241816]: Dec 2 05:15:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:42 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "auth_id": "david", "tenant_id": "0fe90f11d3f64e12b3591732792a929e", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:15:42 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, tenant_id:0fe90f11d3f64e12b3591732792a929e, vol_name:cephfs) < "" Dec 2 05:15:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Dec 2 05:15:42 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Dec 2 05:15:42 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID david with tenant 0fe90f11d3f64e12b3591732792a929e Dec 2 05:15:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/ae52ead4-7b68-47be-8dae-42ce82602ac7", "osd", "allow rw pool=manila_data namespace=fsvolumens_738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:15:42 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/ae52ead4-7b68-47be-8dae-42ce82602ac7", "osd", "allow rw pool=manila_data namespace=fsvolumens_738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:42 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, tenant_id:0fe90f11d3f64e12b3591732792a929e, vol_name:cephfs) < "" Dec 2 
05:15:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:15:42 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/257232119' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:15:43 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Dec 2 05:15:43 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/ae52ead4-7b68-47be-8dae-42ce82602ac7", "osd", "allow rw pool=manila_data namespace=fsvolumens_738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:43 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/ae52ead4-7b68-47be-8dae-42ce82602ac7", "osd", "allow rw pool=manila_data namespace=fsvolumens_738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:43 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/ae52ead4-7b68-47be-8dae-42ce82602ac7", "osd", "allow rw pool=manila_data namespace=fsvolumens_738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v638: 177 pgs: 177 active+clean; 292 MiB data, 1.4 GiB used, 41 GiB / 42 GiB 
avail; 30 KiB/s rd, 367 KiB/s wr, 64 op/s Dec 2 05:15:44 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e259 e259: 6 total, 6 up, 6 in Dec 2 05:15:44 localhost nova_compute[281045]: 2025-12-02 10:15:44.200 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:44 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:15:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < "" Dec 2 05:15:44 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0) Dec 2 05:15:44 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-1696860369 with tenant 82d5a09e66904b8ca3c7a7850f1e5c52 Dec 2 05:15:44 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:15:44 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:44 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < "" Dec 2 05:15:45 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:45 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:45 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:45 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v640: 177 pgs: 177 active+clean; 292 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 150 KiB/s wr, 50 op/s Dec 2 05:15:45 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:45.341 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:15:45 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e260 e260: 6 total, 6 up, 6 in Dec 2 05:15:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:15:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < "" Dec 2 05:15:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5cdc679c-4ca6-4876-b423-0e54f450bff3/.meta.tmp' Dec 2 05:15:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5cdc679c-4ca6-4876-b423-0e54f450bff3/.meta.tmp' to config b'/volumes/_nogroup/5cdc679c-4ca6-4876-b423-0e54f450bff3/.meta' Dec 2 05:15:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < "" Dec 2 05:15:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "format": "json"}]: dispatch Dec 2 05:15:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < "" Dec 2 05:15:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < "" Dec 2 05:15:46 localhost nova_compute[281045]: 2025-12-02 10:15:46.325 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:47 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "size": 1073741824, "namespace_isolated": true, 
"mode": "0755", "format": "json"}]: dispatch Dec 2 05:15:47 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < "" Dec 2 05:15:47 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta.tmp' Dec 2 05:15:47 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta.tmp' to config b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta' Dec 2 05:15:47 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < "" Dec 2 05:15:47 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "format": "json"}]: dispatch Dec 2 05:15:47 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < "" Dec 2 05:15:47 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < "" Dec 2 05:15:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e261 e261: 6 total, 6 up, 6 in Dec 2 05:15:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v643: 177 pgs: 177 active+clean; 292 MiB data, 1.4 GiB used, 41 GiB 
/ 42 GiB avail; 28 KiB/s rd, 141 KiB/s wr, 47 op/s Dec 2 05:15:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:47 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch Dec 2 05:15:47 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0) Dec 2 05:15:47 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} v 0) Dec 2 05:15:47 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:47 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:47 localhost ceph-mgr[287188]: log_channel(audit) log 
[DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch Dec 2 05:15:47 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:47 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1696860369, client_metadata.root=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4 Dec 2 05:15:47 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:15:47 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:48 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:15:48.009 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:15:47Z, description=, device_id=151bb500-c512-4a9b-b37e-ab2024450ce8, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4cb12c60-99ec-4340-8fd0-4d72b3c4fbda, ip_allocation=immediate, mac_address=fa:16:3e:75:ea:08, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, 
is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3682, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:15:47Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m Dec 2 05:15:48 localhost podman[325040]: 2025-12-02 10:15:48.239306012 +0000 UTC m=+0.061078698 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:15:48 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses Dec 2 05:15:48 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:15:48 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:15:48 localhost snmpd[69217]: empty variable list in _query Dec 2 05:15:48 localhost snmpd[69217]: empty variable list in 
_query Dec 2 05:15:48 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:15:48.511 262347 INFO neutron.agent.dhcp.agent [None req-4ee019e8-e0ad-4e8a-a471-c689521992fa - - - - - -] DHCP configuration for ports {'4cb12c60-99ec-4340-8fd0-4d72b3c4fbda'} is completed#033[00m Dec 2 05:15:48 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:48 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:48 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:48 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished Dec 2 05:15:48 localhost nova_compute[281045]: 2025-12-02 10:15:48.929 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:49 localhost nova_compute[281045]: 2025-12-02 10:15:49.200 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:49 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e262 e262: 6 total, 6 up, 6 in Dec 2 05:15:49 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 293 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 138 KiB/s wr, 75 op/s Dec 2 05:15:49 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": 
"5cdc679c-4ca6-4876-b423-0e54f450bff3", "auth_id": "david", "tenant_id": "3212fac1e026474b9022ee93e4d925a9", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:15:49 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, tenant_id:3212fac1e026474b9022ee93e4d925a9, vol_name:cephfs) < "" Dec 2 05:15:49 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Dec 2 05:15:49 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Dec 2 05:15:49 localhost ceph-mgr[287188]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use Dec 2 05:15:49 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, tenant_id:3212fac1e026474b9022ee93e4d925a9, vol_name:cephfs) < "" Dec 2 05:15:49 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:49.419+0000 7fd37dd6f640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use Dec 2 05:15:49 localhost ceph-mgr[287188]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use Dec 2 05:15:50 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e263 e263: 6 total, 6 up, 6 in Dec 2 05:15:50 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Dec 2 05:15:50 localhost 
ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "snap_name": "79e50957-8d03-44cb-99af-cee54fecf7f3", "format": "json"}]: dispatch Dec 2 05:15:50 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:79e50957-8d03-44cb-99af-cee54fecf7f3, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < "" Dec 2 05:15:50 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:79e50957-8d03-44cb-99af-cee54fecf7f3, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < "" Dec 2 05:15:51 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:15:51 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < "" Dec 2 05:15:51 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0) Dec 2 05:15:51 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": 
"json"} : dispatch Dec 2 05:15:51 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-1696860369 with tenant 82d5a09e66904b8ca3c7a7850f1e5c52 Dec 2 05:15:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 293 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 121 KiB/s wr, 66 op/s Dec 2 05:15:51 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:51 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e264 e264: 6 total, 6 up, 6 in Dec 2 05:15:51 localhost nova_compute[281045]: 2025-12-02 10:15:51.328 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:51 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:15:51 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:51 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < "" Dec 2 05:15:51 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e265 e265: 6 total, 6 up, 6 in Dec 2 05:15:52 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:52 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:52 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:52 localhost nova_compute[281045]: 2025-12-02 10:15:52.579 281049 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:52 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "auth_id": "david", "format": "json"}]: dispatch Dec 2 05:15:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < "" Dec 2 05:15:52 localhost ceph-mgr[287188]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '5cdc679c-4ca6-4876-b423-0e54f450bff3' Dec 2 05:15:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < "" Dec 2 05:15:52 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "auth_id": "david", "format": "json"}]: dispatch Dec 2 05:15:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < "" Dec 2 05:15:52 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/5cdc679c-4ca6-4876-b423-0e54f450bff3/dbb45f73-684d-42e6-8bf1-5441b2faf73a Dec 2 05:15:52 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all 
Dec 2 05:15:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < "" Dec 2 05:15:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e266 e266: 6 total, 6 up, 6 in Dec 2 05:15:53 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v651: 177 pgs: 177 active+clean; 293 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 157 KiB/s wr, 139 op/s Dec 2 05:15:53 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "snap_name": "79e50957-8d03-44cb-99af-cee54fecf7f3_2ca4c212-9f32-4ac2-b7ea-e7caebf48841", "force": true, "format": "json"}]: dispatch Dec 2 05:15:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79e50957-8d03-44cb-99af-cee54fecf7f3_2ca4c212-9f32-4ac2-b7ea-e7caebf48841, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < "" Dec 2 05:15:53 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta.tmp' Dec 2 05:15:53 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta.tmp' to config b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta' Dec 2 05:15:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79e50957-8d03-44cb-99af-cee54fecf7f3_2ca4c212-9f32-4ac2-b7ea-e7caebf48841, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, 
vol_name:cephfs) < "" Dec 2 05:15:53 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "snap_name": "79e50957-8d03-44cb-99af-cee54fecf7f3", "force": true, "format": "json"}]: dispatch Dec 2 05:15:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79e50957-8d03-44cb-99af-cee54fecf7f3, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < "" Dec 2 05:15:53 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta.tmp' Dec 2 05:15:53 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta.tmp' to config b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta' Dec 2 05:15:53 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79e50957-8d03-44cb-99af-cee54fecf7f3, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < "" Dec 2 05:15:54 localhost nova_compute[281045]: 2025-12-02 10:15:54.203 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:54 localhost nova_compute[281045]: 2025-12-02 10:15:54.306 281049 DEBUG oslo_concurrency.lockutils [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" by "nova.compute.manager.ComputeManager.detach_volume..do_detach_volume" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:54 localhost nova_compute[281045]: 2025-12-02 10:15:54.307 281049 DEBUG oslo_concurrency.lockutils [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" acquired by "nova.compute.manager.ComputeManager.detach_volume..do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:54 localhost nova_compute[281045]: 2025-12-02 10:15:54.374 281049 INFO nova.compute.manager [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Detaching volume eb88c64a-7c29-421c-91ad-190ba7bbf450#033[00m Dec 2 05:15:54 localhost nova_compute[281045]: 2025-12-02 10:15:54.430 281049 INFO nova.virt.block_device [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Attempting to driver detach volume eb88c64a-7c29-421c-91ad-190ba7bbf450 from mountpoint /dev/vdb#033[00m Dec 2 05:15:54 localhost nova_compute[281045]: 2025-12-02 10:15:54.442 281049 DEBUG nova.virt.libvirt.driver [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Attempting to detach device vdb from instance e4135ac9-548a-4e8d-99d6-cde8dedb2c77 from the persistent domain config. 
_detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m Dec 2 05:15:54 localhost nova_compute[281045]: 2025-12-02 10:15:54.443 281049 DEBUG nova.virt.libvirt.guest [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] detach device xml: Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: eb88c64a-7c29-421c-91ad-190ba7bbf450 Dec 2 05:15:54 localhost nova_compute[281045]:
Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m Dec 2 05:15:54 localhost nova_compute[281045]: 2025-12-02 10:15:54.453 281049 INFO nova.virt.libvirt.driver [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Successfully detached device vdb from instance e4135ac9-548a-4e8d-99d6-cde8dedb2c77 from the persistent domain config.#033[00m Dec 2 05:15:54 localhost nova_compute[281045]: 2025-12-02 10:15:54.454 281049 DEBUG nova.virt.libvirt.driver [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e4135ac9-548a-4e8d-99d6-cde8dedb2c77 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m Dec 2 05:15:54 localhost nova_compute[281045]: 2025-12-02 10:15:54.455 281049 DEBUG nova.virt.libvirt.guest [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] detach device xml: Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: eb88c64a-7c29-421c-91ad-190ba7bbf450 Dec 2 05:15:54 localhost nova_compute[281045]:
Dec 2 05:15:54 localhost nova_compute[281045]: Dec 2 05:15:54 localhost nova_compute[281045]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m Dec 2 05:15:54 localhost nova_compute[281045]: 2025-12-02 10:15:54.585 281049 DEBUG nova.virt.libvirt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Received event virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m Dec 2 05:15:54 localhost nova_compute[281045]: 2025-12-02 10:15:54.588 281049 DEBUG nova.virt.libvirt.driver [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e4135ac9-548a-4e8d-99d6-cde8dedb2c77 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m Dec 2 05:15:54 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch Dec 2 05:15:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:54 localhost nova_compute[281045]: 2025-12-02 10:15:54.592 281049 INFO nova.virt.libvirt.driver [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Successfully detached device vdb from instance e4135ac9-548a-4e8d-99d6-cde8dedb2c77 from the live domain config.#033[00m Dec 2 05:15:54 localhost 
ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0) Dec 2 05:15:54 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:54 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} v 0) Dec 2 05:15:54 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:54 localhost nova_compute[281045]: 2025-12-02 10:15:54.817 281049 DEBUG nova.objects.instance [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lazy-loading 'flavor' on Instance uuid e4135ac9-548a-4e8d-99d6-cde8dedb2c77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:15:54 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch Dec 2 05:15:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, 
sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:54 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1696860369, client_metadata.root=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4 Dec 2 05:15:54 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:15:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:15:54 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:54 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:54 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:54 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished Dec 2 05:15:54 localhost nova_compute[281045]: 2025-12-02 10:15:54.904 281049 DEBUG oslo_concurrency.lockutils [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" "released" by "nova.compute.manager.ComputeManager.detach_volume..do_detach_volume" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:55 localhost 
ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 293 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 125 KiB/s wr, 110 op/s Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.634 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.635 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.637 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.638 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.638 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.640 281049 INFO nova.compute.manager [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Terminating instance#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.641 281049 DEBUG nova.compute.manager [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Start destroying the instance on the hypervisor. 
_shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Dec 2 05:15:55 localhost kernel: device tap5312b3e8-70 left promiscuous mode Dec 2 05:15:55 localhost NetworkManager[5967]: [1764670555.7054] device (tap5312b3e8-70): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.715 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:55 localhost ovn_controller[153778]: 2025-12-02T10:15:55Z|00237|binding|INFO|Releasing lport 5312b3e8-70f6-4e16-95ba-31b46130d41f from this chassis (sb_readonly=0) Dec 2 05:15:55 localhost ovn_controller[153778]: 2025-12-02T10:15:55Z|00238|binding|INFO|Setting lport 5312b3e8-70f6-4e16-95ba-31b46130d41f down in Southbound Dec 2 05:15:55 localhost ovn_controller[153778]: 2025-12-02T10:15:55Z|00239|binding|INFO|Removing iface tap5312b3e8-70 ovn-installed in OVS Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.719 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:55 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:55.758 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:0c:21 10.100.0.8'], port_security=['fa:16:3e:77:0c:21 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e4135ac9-548a-4e8d-99d6-cde8dedb2c77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-8703a229-8c49-443e-95c6-aff62a358434', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd858413a9b01463f96545916d2abe5ab', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10785715-ddea-43bb-82fa-9f44a2fb1faa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22d83034-71a8-46e9-a33a-f696e74c13f0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=5312b3e8-70f6-4e16-95ba-31b46130d41f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:15:55 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:55.760 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 5312b3e8-70f6-4e16-95ba-31b46130d41f in datapath 8703a229-8c49-443e-95c6-aff62a358434 unbound from our chassis#033[00m Dec 2 05:15:55 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:55.761 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8703a229-8c49-443e-95c6-aff62a358434, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:15:55 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:55.762 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[62bef081-cd80-4f3a-9069-106785ae3b2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:55 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:55.763 159483 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8703a229-8c49-443e-95c6-aff62a358434 namespace which is not needed anymore#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.777 281049 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:55 localhost systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Deactivated successfully. Dec 2 05:15:55 localhost systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Consumed 14.794s CPU time. Dec 2 05:15:55 localhost systemd-machined[202765]: Machine qemu-6-instance-0000000b terminated. Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.859 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.866 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.878 281049 INFO nova.virt.libvirt.driver [-] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Instance destroyed successfully.#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.879 281049 DEBUG nova.objects.instance [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lazy-loading 'resources' on Instance uuid e4135ac9-548a-4e8d-99d6-cde8dedb2c77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.904 281049 DEBUG nova.virt.libvirt.vif [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T10:15:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-296444076',display_name='tempest-VolumesBackupsTest-instance-296444076',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-volumesbackupstest-instance-296444076',id=11,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHkG+iQFuqjdTdoAEp/kY7cw/kNkZh2LbPeLiGtN8Y97oQKWkY5uonMIVaaGJFGigPwU4U46n3JFHVn8N98Xn7K+8moZz1t1gU5zOrLM/YgrB2LfY32eA3cmwq2A59hxHw==',key_name='tempest-keypair-352232817',keypairs=,launch_index=0,launched_at=2025-12-02T10:15:17Z,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='d858413a9b01463f96545916d2abe5ab',ramdisk_id='',reservation_id='r-nwps7030',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-47
9123361',owner_user_name='tempest-VolumesBackupsTest-479123361-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2025-12-02T10:15:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0e5c738ba752455b908099b234a743a2',uuid=e4135ac9-548a-4e8d-99d6-cde8dedb2c77,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.905 281049 DEBUG nova.network.os_vif_util [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Converting VIF {"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": 
"fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.906 281049 DEBUG nova.network.os_vif_util [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:0c:21,bridge_name='br-int',has_traffic_filtering=True,id=5312b3e8-70f6-4e16-95ba-31b46130d41f,network=Network(8703a229-8c49-443e-95c6-aff62a358434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5312b3e8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.907 281049 DEBUG os_vif [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 
0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:0c:21,bridge_name='br-int',has_traffic_filtering=True,id=5312b3e8-70f6-4e16-95ba-31b46130d41f,network=Network(8703a229-8c49-443e-95c6-aff62a358434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5312b3e8-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.910 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.910 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5312b3e8-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.912 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.915 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:55 localhost nova_compute[281045]: 2025-12-02 10:15:55.919 281049 INFO os_vif [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:0c:21,bridge_name='br-int',has_traffic_filtering=True,id=5312b3e8-70f6-4e16-95ba-31b46130d41f,network=Network(8703a229-8c49-443e-95c6-aff62a358434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5312b3e8-70')#033[00m Dec 2 
05:15:55 localhost neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434[324743]: [NOTICE] (324747) : haproxy version is 2.8.14-c23fe91 Dec 2 05:15:55 localhost neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434[324743]: [NOTICE] (324747) : path to executable is /usr/sbin/haproxy Dec 2 05:15:55 localhost neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434[324743]: [WARNING] (324747) : Exiting Master process... Dec 2 05:15:55 localhost neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434[324743]: [ALERT] (324747) : Current worker (324749) exited with code 143 (Terminated) Dec 2 05:15:55 localhost neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434[324743]: [WARNING] (324747) : All workers exited. Exiting... (0) Dec 2 05:15:55 localhost systemd[1]: libpod-9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8.scope: Deactivated successfully. Dec 2 05:15:55 localhost podman[325099]: 2025-12-02 10:15:55.963830789 +0000 UTC m=+0.075205272 container died 9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:15:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:15:56 localhost podman[325099]: 2025-12-02 10:15:56.012583327 +0000 UTC m=+0.123957760 container cleanup 9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:15:56 localhost podman[325128]: 2025-12-02 10:15:56.045199819 +0000 UTC m=+0.073680715 container cleanup 9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:15:56 localhost systemd[1]: libpod-conmon-9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8.scope: Deactivated successfully. 
Dec 2 05:15:56 localhost nova_compute[281045]: 2025-12-02 10:15:56.065 281049 DEBUG nova.compute.manager [req-b12beed7-9fd4-4727-954d-65cd45bafc2f req-275910b0-079e-496c-95ea-95029127d9e9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-vif-unplugged-5312b3e8-70f6-4e16-95ba-31b46130d41f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:15:56 localhost nova_compute[281045]: 2025-12-02 10:15:56.066 281049 DEBUG oslo_concurrency.lockutils [req-b12beed7-9fd4-4727-954d-65cd45bafc2f req-275910b0-079e-496c-95ea-95029127d9e9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:56 localhost nova_compute[281045]: 2025-12-02 10:15:56.067 281049 DEBUG oslo_concurrency.lockutils [req-b12beed7-9fd4-4727-954d-65cd45bafc2f req-275910b0-079e-496c-95ea-95029127d9e9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:56 localhost nova_compute[281045]: 2025-12-02 10:15:56.068 281049 DEBUG oslo_concurrency.lockutils [req-b12beed7-9fd4-4727-954d-65cd45bafc2f req-275910b0-079e-496c-95ea-95029127d9e9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:56 localhost nova_compute[281045]: 2025-12-02 10:15:56.068 281049 DEBUG nova.compute.manager [req-b12beed7-9fd4-4727-954d-65cd45bafc2f req-275910b0-079e-496c-95ea-95029127d9e9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] No waiting events found dispatching network-vif-unplugged-5312b3e8-70f6-4e16-95ba-31b46130d41f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:15:56 localhost nova_compute[281045]: 2025-12-02 10:15:56.069 281049 DEBUG nova.compute.manager [req-b12beed7-9fd4-4727-954d-65cd45bafc2f req-275910b0-079e-496c-95ea-95029127d9e9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-vif-unplugged-5312b3e8-70f6-4e16-95ba-31b46130d41f for instance with task_state deleting. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Dec 2 05:15:56 localhost podman[325146]: 2025-12-02 10:15:56.096139884 +0000 UTC m=+0.063767610 container remove 9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:15:56 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:56.101 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[b22281ac-564f-4951-ae11-f0b5469348f2]: (4, ('Tue Dec 2 10:15:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434 (9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8)\n9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8\nTue Dec 2 10:15:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434 (9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8)\n9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:56 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:56.103 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[b0660266-e5da-4bf8-9b9a-14de1b6fe3c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:56 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:56.104 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8703a229-80, 
bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:15:56 localhost kernel: device tap8703a229-80 left promiscuous mode Dec 2 05:15:56 localhost nova_compute[281045]: 2025-12-02 10:15:56.108 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:56 localhost nova_compute[281045]: 2025-12-02 10:15:56.114 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:56 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "auth_id": "david", "format": "json"}]: dispatch Dec 2 05:15:56 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:56.117 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[c402b3d2-5dbc-454d-a484-3b31aae865fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < "" Dec 2 05:15:56 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:56.135 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[dce3df0d-b60c-46b7-8b73-2d14ec923a3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:56 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:56.136 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[a8bdbc9e-e8e8-43d7-bcc6-e82aa28bf2fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:56 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:56.150 
262550 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6eab08-0018-461d-963b-210d1f43a298]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 
'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1268128, 'reachable_time': 26781, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 
'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325163, 'error': None, 'target': 'ovnmeta-8703a229-8c49-443e-95c6-aff62a358434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:56 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:56.153 159602 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8703a229-8c49-443e-95c6-aff62a358434 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 2 05:15:56 localhost ovn_metadata_agent[159477]: 2025-12-02 10:15:56.153 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[274b20e4-555f-43dc-9e60-e32d2b2b2310]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:15:56 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Dec 2 05:15:56 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Dec 2 05:15:56 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0) Dec 2 05:15:56 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Dec 2 05:15:56 localhost ceph-mgr[287188]: [volumes 
INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < "" Dec 2 05:15:56 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "auth_id": "david", "format": "json"}]: dispatch Dec 2 05:15:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < "" Dec 2 05:15:56 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/ae52ead4-7b68-47be-8dae-42ce82602ac7 Dec 2 05:15:56 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:15:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < "" Dec 2 05:15:56 localhost nova_compute[281045]: 2025-12-02 10:15:56.595 281049 INFO nova.virt.libvirt.driver [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Deleting instance files /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77_del#033[00m Dec 2 05:15:56 localhost nova_compute[281045]: 2025-12-02 10:15:56.596 281049 INFO nova.virt.libvirt.driver [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Deletion of 
/var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77_del complete#033[00m Dec 2 05:15:56 localhost nova_compute[281045]: 2025-12-02 10:15:56.719 281049 INFO nova.compute.manager [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Took 1.08 seconds to destroy the instance on the hypervisor.#033[00m Dec 2 05:15:56 localhost nova_compute[281045]: 2025-12-02 10:15:56.720 281049 DEBUG oslo.service.loopingcall [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Dec 2 05:15:56 localhost nova_compute[281045]: 2025-12-02 10:15:56.720 281049 DEBUG nova.compute.manager [-] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Dec 2 05:15:56 localhost nova_compute[281045]: 2025-12-02 10:15:56.721 281049 DEBUG nova.network.neutron [-] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Dec 2 05:15:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:15:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:15:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. 
Dec 2 05:15:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:15:56 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e267 e267: 6 total, 6 up, 6 in Dec 2 05:15:56 localhost podman[325168]: 2025-12-02 10:15:56.860388857 +0000 UTC m=+0.098816797 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute) Dec 2 05:15:56 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Dec 2 05:15:56 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Dec 2 05:15:56 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Dec 2 05:15:56 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Dec 2 05:15:56 localhost podman[325166]: 2025-12-02 10:15:56.910385863 +0000 UTC m=+0.151765334 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 05:15:56 localhost podman[325166]: 2025-12-02 10:15:56.94119263 +0000 UTC m=+0.182572101 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:15:56 localhost systemd[1]: var-lib-containers-storage-overlay-9251bae03dba350098b1f5dbad067aff0e21633b444c29635f1cf251c0cbf4bf-merged.mount: Deactivated successfully. Dec 2 05:15:56 localhost systemd[1]: run-netns-ovnmeta\x2d8703a229\x2d8c49\x2d443e\x2d95c6\x2daff62a358434.mount: Deactivated successfully. 
Dec 2 05:15:56 localhost podman[325169]: 2025-12-02 10:15:56.959209103 +0000 UTC m=+0.192889767 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:15:56 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 05:15:56 localhost podman[325168]: 2025-12-02 10:15:56.976601478 +0000 UTC m=+0.215029388 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 05:15:56 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", 
"vol_name": "cephfs", "clone_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "format": "json"}]: dispatch Dec 2 05:15:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ffd086bb-506f-4c57-a27d-657caefc8485, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ffd086bb-506f-4c57-a27d-657caefc8485, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:56 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:56.984+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ffd086bb-506f-4c57-a27d-657caefc8485' of type subvolume Dec 2 05:15:56 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ffd086bb-506f-4c57-a27d-657caefc8485' of type subvolume Dec 2 05:15:56 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "force": true, "format": "json"}]: dispatch Dec 2 05:15:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < "" Dec 2 05:15:56 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:15:56 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485'' moved to trashcan Dec 2 05:15:56 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:15:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < "" Dec 2 05:15:57 localhost podman[325167]: 2025-12-02 10:15:57.056476722 +0000 UTC m=+0.294676525 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:15:57 localhost podman[325167]: 2025-12-02 10:15:57.069005627 +0000 UTC m=+0.307205450 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid 
Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 05:15:57 localhost podman[325169]: 2025-12-02 10:15:57.081256884 +0000 UTC m=+0.314937588 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:15:57 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:15:57 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 05:15:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v654: 177 pgs: 177 active+clean; 293 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 105 KiB/s wr, 93 op/s Dec 2 05:15:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:57 localhost neutron_sriov_agent[255428]: 2025-12-02 10:15:57.569 2 INFO neutron.agent.securitygroups_rpc [req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 req-bbad3521-a7cd-468f-9368-bc82a5a5c437 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Security group member updated ['10785715-ddea-43bb-82fa-9f44a2fb1faa']#033[00m Dec 2 05:15:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e268 e268: 6 total, 6 up, 6 in Dec 2 05:15:57 localhost nova_compute[281045]: 2025-12-02 10:15:57.847 281049 DEBUG nova.network.neutron [-] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:15:57 localhost nova_compute[281045]: 2025-12-02 10:15:57.862 281049 INFO nova.compute.manager [-] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Took 1.14 seconds to deallocate network for instance.#033[00m Dec 2 05:15:57 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": 
"5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch Dec 2 05:15:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < "" Dec 2 05:15:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0) Dec 2 05:15:57 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:57 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-1696860369 with tenant 82d5a09e66904b8ca3c7a7850f1e5c52 Dec 2 05:15:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} v 0) Dec 2 05:15:57 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.043 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.043 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < "" Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.092 281049 DEBUG oslo_concurrency.processutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.117 281049 DEBUG nova.compute.manager [req-05cf092e-64b9-4b03-9ced-58a53cfc5cb9 req-8059bc6f-904b-4071-883c-675f93d6c2cb dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] 
[instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.118 281049 DEBUG oslo_concurrency.lockutils [req-05cf092e-64b9-4b03-9ced-58a53cfc5cb9 req-8059bc6f-904b-4071-883c-675f93d6c2cb dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.118 281049 DEBUG oslo_concurrency.lockutils [req-05cf092e-64b9-4b03-9ced-58a53cfc5cb9 req-8059bc6f-904b-4071-883c-675f93d6c2cb dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.118 281049 DEBUG oslo_concurrency.lockutils [req-05cf092e-64b9-4b03-9ced-58a53cfc5cb9 req-8059bc6f-904b-4071-883c-675f93d6c2cb dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.118 281049 DEBUG nova.compute.manager [req-05cf092e-64b9-4b03-9ced-58a53cfc5cb9 req-8059bc6f-904b-4071-883c-675f93d6c2cb dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default 
default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] No waiting events found dispatching network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.119 281049 WARNING nova.compute.manager [req-05cf092e-64b9-4b03-9ced-58a53cfc5cb9 req-8059bc6f-904b-4071-883c-675f93d6c2cb dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received unexpected event network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f for instance with vm_state deleted and task_state None.#033[00m Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.119 281049 DEBUG nova.compute.manager [req-05cf092e-64b9-4b03-9ced-58a53cfc5cb9 req-8059bc6f-904b-4071-883c-675f93d6c2cb dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-vif-deleted-5312b3e8-70f6-4e16-95ba-31b46130d41f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:15:58 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:15:58 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2709406224' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.559 281049 DEBUG oslo_concurrency.processutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.565 281049 DEBUG nova.compute.provider_tree [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.630 281049 DEBUG nova.scheduler.client.report [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.768 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default 
default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.802 281049 INFO nova.scheduler.client.report [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Deleted allocations for instance e4135ac9-548a-4e8d-99d6-cde8dedb2c77#033[00m Dec 2 05:15:58 localhost nova_compute[281045]: 2025-12-02 10:15:58.880 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 3.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:58 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:58 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:58 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:58 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:59 localhost nova_compute[281045]: 2025-12-02 10:15:59.207 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 103 KiB/s rd, 243 KiB/s wr, 156 op/s Dec 2 05:15:59 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "format": "json"}]: dispatch Dec 2 05:15:59 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:59 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:15:59 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5cdc679c-4ca6-4876-b423-0e54f450bff3' of type subvolume Dec 2 
05:15:59 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:59.440+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5cdc679c-4ca6-4876-b423-0e54f450bff3' of type subvolume Dec 2 05:15:59 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "force": true, "format": "json"}]: dispatch Dec 2 05:15:59 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < "" Dec 2 05:15:59 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5cdc679c-4ca6-4876-b423-0e54f450bff3'' moved to trashcan Dec 2 05:15:59 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:15:59 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < "" Dec 2 05:15:59 localhost nova_compute[281045]: 2025-12-02 10:15:59.970 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:59 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:15:59 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:15:59 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:15:59 localhost podman[325286]: 2025-12-02 10:15:59.994836398 +0000 UTC m=+0.052151864 
container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 2 05:16:00 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "718c525a-962f-42e3-9573-9fc3919d4aa7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:16:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:718c525a-962f-42e3-9573-9fc3919d4aa7, vol_name:cephfs) < "" Dec 2 05:16:00 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/718c525a-962f-42e3-9573-9fc3919d4aa7/.meta.tmp' Dec 2 05:16:00 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/718c525a-962f-42e3-9573-9fc3919d4aa7/.meta.tmp' to config b'/volumes/_nogroup/718c525a-962f-42e3-9573-9fc3919d4aa7/.meta' Dec 2 05:16:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:718c525a-962f-42e3-9573-9fc3919d4aa7, vol_name:cephfs) < "" Dec 2 05:16:00 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "718c525a-962f-42e3-9573-9fc3919d4aa7", "format": "json"}]: dispatch Dec 2 05:16:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:718c525a-962f-42e3-9573-9fc3919d4aa7, vol_name:cephfs) < "" Dec 2 05:16:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:718c525a-962f-42e3-9573-9fc3919d4aa7, vol_name:cephfs) < "" Dec 2 05:16:00 localhost nova_compute[281045]: 2025-12-02 10:16:00.952 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v657: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 117 KiB/s wr, 56 op/s Dec 2 05:16:01 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch Dec 2 05:16:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:16:01 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0) Dec 2 05:16:01 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": 
"client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:16:01 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} v 0) Dec 2 05:16:01 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:16:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:16:01 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch Dec 2 05:16:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:16:01 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1696860369, client_metadata.root=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4 Dec 2 05:16:01 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 2 05:16:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:16:01 localhost ceph-mon[301710]: 
mon.np0005541914@2(peon).osd e269 e269: 6 total, 6 up, 6 in Dec 2 05:16:01 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:16:01 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:16:01 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:16:01 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished Dec 2 05:16:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:16:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. 
Dec 2 05:16:02 localhost podman[325309]: 2025-12-02 10:16:02.086624832 +0000 UTC m=+0.090099580 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:16:02 localhost podman[325309]: 2025-12-02 10:16:02.127846907 +0000 UTC m=+0.131321625 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:16:02 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:16:02 localhost podman[325310]: 2025-12-02 10:16:02.13375516 +0000 UTC m=+0.136114454 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Dec 2 05:16:02 localhost podman[325310]: 2025-12-02 10:16:02.214517441 +0000 UTC m=+0.216876735 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, container_name=openstack_network_exporter, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 05:16:02 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:16:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:02 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "format": "json"}]: dispatch Dec 2 05:16:02 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:02 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:02 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a1ba20ee-ed37-461f-8a6b-289e0637343e' of type subvolume Dec 2 05:16:02 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:02.768+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a1ba20ee-ed37-461f-8a6b-289e0637343e' of type subvolume Dec 2 05:16:02 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "force": true, "format": "json"}]: dispatch Dec 2 05:16:02 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < "" Dec 2 05:16:02 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 
'b'/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e'' moved to trashcan Dec 2 05:16:02 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:16:02 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < "" Dec 2 05:16:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e270 e270: 6 total, 6 up, 6 in Dec 2 05:16:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:16:03.185 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:16:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:16:03.185 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:16:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:16:03.186 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:16:03 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 325 KiB/s wr, 113 op/s Dec 2 05:16:03 localhost podman[239757]: time="2025-12-02T10:16:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:16:03 localhost podman[239757]: @ - - [02/Dec/2025:10:16:03 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:16:03 localhost podman[239757]: @ - - [02/Dec/2025:10:16:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19254 "" "Go-http-client/1.1" Dec 2 05:16:03 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e271 e271: 6 total, 6 up, 6 in Dec 2 05:16:03 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "718c525a-962f-42e3-9573-9fc3919d4aa7", "format": "json"}]: dispatch Dec 2 05:16:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:718c525a-962f-42e3-9573-9fc3919d4aa7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:718c525a-962f-42e3-9573-9fc3919d4aa7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:03 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:03.997+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '718c525a-962f-42e3-9573-9fc3919d4aa7' of type subvolume Dec 2 05:16:04 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '718c525a-962f-42e3-9573-9fc3919d4aa7' of type subvolume Dec 2 05:16:04 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "718c525a-962f-42e3-9573-9fc3919d4aa7", "force": true, "format": "json"}]: dispatch Dec 2 05:16:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:718c525a-962f-42e3-9573-9fc3919d4aa7, vol_name:cephfs) < "" Dec 2 05:16:04 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/718c525a-962f-42e3-9573-9fc3919d4aa7'' moved to trashcan Dec 2 05:16:04 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:16:04 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:718c525a-962f-42e3-9573-9fc3919d4aa7, vol_name:cephfs) < "" Dec 2 05:16:04 localhost nova_compute[281045]: 2025-12-02 10:16:04.210 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:16:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2684550647' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:16:04 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:16:04 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2684550647' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:16:05 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "format": "json"}]: dispatch Dec 2 05:16:05 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:05 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:05 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:05.238+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5aafe356-dc3f-4e86-bea5-6655303e90b0' of type subvolume Dec 2 05:16:05 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5aafe356-dc3f-4e86-bea5-6655303e90b0' of type subvolume Dec 2 05:16:05 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "force": true, "format": "json"}]: dispatch Dec 2 05:16:05 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:16:05 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 
'b'/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0'' moved to trashcan Dec 2 05:16:05 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:16:05 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < "" Dec 2 05:16:05 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 168 KiB/s wr, 38 op/s Dec 2 05:16:05 localhost nova_compute[281045]: 2025-12-02 10:16:05.984 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:06 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "format": "json"}]: dispatch Dec 2 05:16:06 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:06 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:06.017+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '07b7e455-1272-48fc-92f9-fd54c3fafcb0' of type subvolume Dec 2 05:16:06 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'07b7e455-1272-48fc-92f9-fd54c3fafcb0' of type subvolume Dec 2 05:16:06 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "force": true, "format": "json"}]: dispatch Dec 2 05:16:06 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < "" Dec 2 05:16:06 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0'' moved to trashcan Dec 2 05:16:06 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:16:06 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < "" Dec 2 05:16:06 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e272 e272: 6 total, 6 up, 6 in Dec 2 05:16:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:16:06 Dec 2 05:16:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 05:16:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap Dec 2 05:16:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['.mgr', 'manila_data', 'backups', 'manila_metadata', 'images', 'vms', 'volumes'] Dec 2 05:16:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes Dec 2 05:16:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:16:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:16:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:16:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:16:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:16:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:16:07 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:16:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < "" Dec 2 05:16:07 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp' Dec 2 05:16:07 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp' to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta' Dec 2 05:16:07 localhost nova_compute[281045]: 2025-12-02 10:16:07.256 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < "" Dec 2 05:16:07 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": 
"cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "format": "json"}]: dispatch Dec 2 05:16:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < "" Dec 2 05:16:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < "" Dec 2 05:16:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 185 KiB/s wr, 41 op/s Dec 2 05:16:07 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses Dec 2 05:16:07 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:16:07 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:16:07 localhost podman[325363]: 2025-12-02 10:16:07.283435152 +0000 UTC m=+0.093781613 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:16:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:16:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:16:07 localhost ceph-mgr[287188]: 
[pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 2 05:16:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:16:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Dec 2 05:16:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:16:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014866541910943606 of space, bias 1.0, pg target 0.2968352868218407 quantized to 32 (current 32) Dec 2 05:16:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:16:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 2 05:16:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:16:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 2 05:16:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:16:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.7263051367950866e-06 of space, bias 1.0, pg target 0.0005425347222222222 quantized to 32 (current 32) Dec 2 05:16:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:16:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0016815850083752094 of space, bias 
4.0, pg target 1.3385416666666667 quantized to 16 (current 16) Dec 2 05:16:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:16:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:16:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:16:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:16:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:16:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:16:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:16:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:16:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:16:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:16:07 localhost nova_compute[281045]: 2025-12-02 10:16:07.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 05:16:08 localhost nova_compute[281045]: 2025-12-02 10:16:08.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:08 localhost nova_compute[281045]: 2025-12-02 10:16:08.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:08 localhost podman[325384]: 2025-12-02 10:16:08.548713108 +0000 UTC m=+0.056996762 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:16:08 localhost podman[325384]: 2025-12-02 10:16:08.559898422 +0000 UTC m=+0.068182096 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 05:16:08 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 05:16:09 localhost nova_compute[281045]: 2025-12-02 10:16:09.213 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:09 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 233 KiB/s wr, 67 op/s Dec 2 05:16:09 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "auth_id": "admin", "format": "json"}]: dispatch Dec 2 05:16:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < "" Dec 2 05:16:09 localhost ceph-mgr[287188]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist Dec 2 05:16:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < "" Dec 2 05:16:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:09.520+0000 7fd37dd6f640 -1 mgr.server reply reply 
(2) No such file or directory auth ID: admin doesn't exist Dec 2 05:16:09 localhost ceph-mgr[287188]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist Dec 2 05:16:09 localhost nova_compute[281045]: 2025-12-02 10:16:09.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:09 localhost nova_compute[281045]: 2025-12-02 10:16:09.550 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:16:09 localhost nova_compute[281045]: 2025-12-02 10:16:09.551 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:16:09 localhost nova_compute[281045]: 2025-12-02 10:16:09.552 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:16:09 localhost nova_compute[281045]: 2025-12-02 10:16:09.552 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:16:09 localhost nova_compute[281045]: 2025-12-02 10:16:09.552 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:16:09 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "format": "json"}]: dispatch Dec 2 05:16:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:09 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:09.700+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '738f4ca9-41a9-48cc-8ca1-8d9ae9041202' of type subvolume Dec 2 05:16:09 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '738f4ca9-41a9-48cc-8ca1-8d9ae9041202' of type subvolume Dec 2 05:16:09 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "force": true, "format": "json"}]: dispatch Dec 2 05:16:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < "" Dec 2 05:16:09 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202'' moved to trashcan Dec 2 05:16:09 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:16:09 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < "" Dec 2 05:16:09 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:16:09 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1781133973' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:16:10 localhost nova_compute[281045]: 2025-12-02 10:16:10.011 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:16:10 localhost nova_compute[281045]: 2025-12-02 10:16:10.213 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:16:10 localhost nova_compute[281045]: 2025-12-02 10:16:10.214 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11409MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:16:10 localhost nova_compute[281045]: 2025-12-02 10:16:10.215 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:16:10 localhost nova_compute[281045]: 2025-12-02 10:16:10.215 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:16:10 localhost nova_compute[281045]: 2025-12-02 10:16:10.422 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:16:10 localhost nova_compute[281045]: 2025-12-02 10:16:10.422 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:16:10 localhost nova_compute[281045]: 2025-12-02 10:16:10.445 281049 DEBUG 
oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:16:10 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "snap_name": "395e084c-5f31-4d0b-b40b-8a631da3af09", "format": "json"}]: dispatch Dec 2 05:16:10 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < "" Dec 2 05:16:10 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < "" Dec 2 05:16:10 localhost nova_compute[281045]: 2025-12-02 10:16:10.875 281049 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:16:10 localhost nova_compute[281045]: 2025-12-02 10:16:10.877 281049 INFO nova.compute.manager [-] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] VM Stopped (Lifecycle Event)#033[00m Dec 2 05:16:10 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:16:10 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2839302655' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:16:10 localhost nova_compute[281045]: 2025-12-02 10:16:10.954 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:16:10 localhost nova_compute[281045]: 2025-12-02 10:16:10.963 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:16:11 localhost nova_compute[281045]: 2025-12-02 10:16:11.040 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:11 localhost nova_compute[281045]: 2025-12-02 10:16:11.090 281049 DEBUG nova.compute.manager [None req-924b3d3d-3944-48f4-bb01-134bf874dc0d - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:16:11 localhost nova_compute[281045]: 2025-12-02 10:16:11.114 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:16:11 localhost nova_compute[281045]: 2025-12-02 10:16:11.178 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:16:11 localhost nova_compute[281045]: 2025-12-02 10:16:11.180 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:16:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 61 KiB/s wr, 25 op/s Dec 2 05:16:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e273 e273: 6 total, 6 up, 6 in Dec 2 05:16:12 localhost openstack_network_exporter[241816]: ERROR 10:16:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:16:12 localhost openstack_network_exporter[241816]: ERROR 10:16:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:16:12 localhost openstack_network_exporter[241816]: ERROR 10:16:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:16:12 localhost openstack_network_exporter[241816]: ERROR 10:16:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:16:12 localhost openstack_network_exporter[241816]: Dec 2 05:16:12 localhost openstack_network_exporter[241816]: ERROR 10:16:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please 
specify an existing datapath Dec 2 05:16:12 localhost openstack_network_exporter[241816]: Dec 2 05:16:12 localhost nova_compute[281045]: 2025-12-02 10:16:12.176 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:12 localhost nova_compute[281045]: 2025-12-02 10:16:12.176 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:12 localhost nova_compute[281045]: 2025-12-02 10:16:12.177 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:16:12 localhost nova_compute[281045]: 2025-12-02 10:16:12.177 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:16:12 localhost nova_compute[281045]: 2025-12-02 10:16:12.260 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 05:16:12 localhost nova_compute[281045]: 2025-12-02 10:16:12.261 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e274 e274: 6 total, 6 up, 6 in Dec 2 05:16:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 127 KiB/s wr, 38 op/s Dec 2 05:16:13 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "snap_name": "395e084c-5f31-4d0b-b40b-8a631da3af09", "target_sub_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "format": "json"}]: dispatch Dec 2 05:16:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, target_sub_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, vol_name:cephfs) < "" Dec 2 05:16:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp' Dec 2 05:16:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp' to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta' Dec 2 05:16:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 65fed077-d23e-47ab-99c4-c249dff46217 for path b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd' Dec 2 05:16:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp' Dec 2 05:16:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp' to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta' Dec 2 05:16:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:16:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, target_sub_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, vol_name:cephfs) < "" Dec 2 05:16:13 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "format": "json"}]: dispatch Dec 2 05:16:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.940+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-mgr[287188]: client.0 error 
registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.940+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.940+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.940+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.940+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd Dec 2 05:16:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, c4aff2e0-53b6-4b58-8317-036e112a5bcd) Dec 2 05:16:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.972+0000 7fd383d7b640 
-1 client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.972+0000 7fd383d7b640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.972+0000 7fd383d7b640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.972+0000 7fd383d7b640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.973+0000 7fd383d7b640 -1 client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists Dec 2 05:16:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, c4aff2e0-53b6-4b58-8317-036e112a5bcd) -- by 0 seconds Dec 2 05:16:14 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp' Dec 2 05:16:14 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp' to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta' Dec 2 05:16:14 localhost nova_compute[281045]: 2025-12-02 10:16:14.216 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:14 localhost nova_compute[281045]: 2025-12-02 10:16:14.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:15 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e275 e275: 6 total, 6 up, 6 in Dec 2 05:16:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 56 KiB/s wr, 6 op/s Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 
05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost 
ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:15 localhost nova_compute[281045]: 2025-12-02 10:16:15.527 281049 DEBUG 
oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:15 localhost nova_compute[281045]: 2025-12-02 10:16:15.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:16:16 localhost nova_compute[281045]: 2025-12-02 10:16:16.080 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.snap/395e084c-5f31-4d0b-b40b-8a631da3af09/5859e93c-459f-40c4-a0ad-221e72111d9a' to b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/5a7b8174-6a6e-4c82-9582-305f3b6a0931' Dec 2 05:16:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp' Dec 2 05:16:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp' to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta' Dec 2 05:16:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.clone_index] untracking 65fed077-d23e-47ab-99c4-c249dff46217 Dec 2 05:16:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp' Dec 2 05:16:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] 
Renamed b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp' to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta' Dec 2 05:16:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp' Dec 2 05:16:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp' to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta' Dec 2 05:16:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, c4aff2e0-53b6-4b58-8317-036e112a5bcd) Dec 2 05:16:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 56 KiB/s wr, 6 op/s Dec 2 05:16:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:16:17 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2273007360' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:16:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:16:17 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2273007360' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:16:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:18 localhost nova_compute[281045]: 2025-12-02 10:16:18.525 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:19 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:16:19 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < "" Dec 2 05:16:19 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta.tmp' Dec 2 05:16:19 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta.tmp' to config b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta' Dec 2 05:16:19 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < "" Dec 2 05:16:19 localhost 
ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "format": "json"}]: dispatch Dec 2 05:16:19 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < "" Dec 2 05:16:19 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < "" Dec 2 05:16:19 localhost nova_compute[281045]: 2025-12-02 10:16:19.223 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 98 KiB/s wr, 57 op/s Dec 2 05:16:21 localhost nova_compute[281045]: 2025-12-02 10:16:21.127 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v674: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 49 KiB/s wr, 47 op/s Dec 2 05:16:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "75b01de1-fd46-4d42-88bc-75b04e569dcb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:16:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs 
subvolume create, size:1073741824, sub_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, vol_name:cephfs) < "" Dec 2 05:16:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/75b01de1-fd46-4d42-88bc-75b04e569dcb/.meta.tmp' Dec 2 05:16:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/75b01de1-fd46-4d42-88bc-75b04e569dcb/.meta.tmp' to config b'/volumes/_nogroup/75b01de1-fd46-4d42-88bc-75b04e569dcb/.meta' Dec 2 05:16:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, vol_name:cephfs) < "" Dec 2 05:16:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "75b01de1-fd46-4d42-88bc-75b04e569dcb", "format": "json"}]: dispatch Dec 2 05:16:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, vol_name:cephfs) < "" Dec 2 05:16:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, vol_name:cephfs) < "" Dec 2 05:16:21 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e276 e276: 6 total, 6 up, 6 in Dec 2 05:16:22 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "snap_name": "5212963f-950b-468a-8f66-9155c3dfc1c6", "format": "json"}]: dispatch Dec 2 05:16:22 localhost 
ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5212963f-950b-468a-8f66-9155c3dfc1c6, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < "" Dec 2 05:16:22 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5212963f-950b-468a-8f66-9155c3dfc1c6, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < "" Dec 2 05:16:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:23 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8710895b-fa91-4f50-bc6f-341cddce5e76", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:16:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8710895b-fa91-4f50-bc6f-341cddce5e76, vol_name:cephfs) < "" Dec 2 05:16:23 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8710895b-fa91-4f50-bc6f-341cddce5e76/.meta.tmp' Dec 2 05:16:23 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8710895b-fa91-4f50-bc6f-341cddce5e76/.meta.tmp' to config b'/volumes/_nogroup/8710895b-fa91-4f50-bc6f-341cddce5e76/.meta' Dec 2 05:16:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:8710895b-fa91-4f50-bc6f-341cddce5e76, vol_name:cephfs) < "" Dec 2 05:16:23 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8710895b-fa91-4f50-bc6f-341cddce5e76", "format": "json"}]: dispatch Dec 2 05:16:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8710895b-fa91-4f50-bc6f-341cddce5e76, vol_name:cephfs) < "" Dec 2 05:16:23 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8710895b-fa91-4f50-bc6f-341cddce5e76, vol_name:cephfs) < "" Dec 2 05:16:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 98 KiB/s wr, 52 op/s Dec 2 05:16:24 localhost nova_compute[281045]: 2025-12-02 10:16:24.225 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 79 KiB/s wr, 42 op/s Dec 2 05:16:25 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d7d186aa-42ff-406e-975a-236ed40d3d49", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:16:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d7d186aa-42ff-406e-975a-236ed40d3d49, vol_name:cephfs) < "" Dec 2 05:16:25 
localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d7d186aa-42ff-406e-975a-236ed40d3d49/.meta.tmp' Dec 2 05:16:25 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d7d186aa-42ff-406e-975a-236ed40d3d49/.meta.tmp' to config b'/volumes/_nogroup/d7d186aa-42ff-406e-975a-236ed40d3d49/.meta' Dec 2 05:16:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d7d186aa-42ff-406e-975a-236ed40d3d49, vol_name:cephfs) < "" Dec 2 05:16:25 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d7d186aa-42ff-406e-975a-236ed40d3d49", "format": "json"}]: dispatch Dec 2 05:16:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d7d186aa-42ff-406e-975a-236ed40d3d49, vol_name:cephfs) < "" Dec 2 05:16:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d7d186aa-42ff-406e-975a-236ed40d3d49, vol_name:cephfs) < "" Dec 2 05:16:26 localhost nova_compute[281045]: 2025-12-02 10:16:26.129 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:26 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "snap_name": "5212963f-950b-468a-8f66-9155c3dfc1c6_f8a18cca-a1e3-4a3b-ab79-627d72357f7e", "force": true, "format": "json"}]: dispatch 
Dec 2 05:16:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5212963f-950b-468a-8f66-9155c3dfc1c6_f8a18cca-a1e3-4a3b-ab79-627d72357f7e, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < "" Dec 2 05:16:26 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta.tmp' Dec 2 05:16:26 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta.tmp' to config b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta' Dec 2 05:16:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5212963f-950b-468a-8f66-9155c3dfc1c6_f8a18cca-a1e3-4a3b-ab79-627d72357f7e, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < "" Dec 2 05:16:26 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "snap_name": "5212963f-950b-468a-8f66-9155c3dfc1c6", "force": true, "format": "json"}]: dispatch Dec 2 05:16:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5212963f-950b-468a-8f66-9155c3dfc1c6, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < "" Dec 2 05:16:26 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta.tmp' Dec 2 05:16:26 localhost ceph-mgr[287188]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta.tmp' to config b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta' Dec 2 05:16:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5212963f-950b-468a-8f66-9155c3dfc1c6, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < "" Dec 2 05:16:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:16:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:16:27 localhost systemd[1]: tmp-crun.esm64p.mount: Deactivated successfully. Dec 2 05:16:27 localhost podman[325473]: 2025-12-02 10:16:27.092387117 +0000 UTC m=+0.089957735 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible) Dec 2 05:16:27 localhost podman[325473]: 2025-12-02 10:16:27.103956292 +0000 UTC m=+0.101526910 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:16:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:16:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:16:27 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. Dec 2 05:16:27 localhost systemd[1]: tmp-crun.IsyuYk.mount: Deactivated successfully. 
Dec 2 05:16:27 localhost podman[325474]: 2025-12-02 10:16:27.211586689 +0000 UTC m=+0.202356478 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm) Dec 2 05:16:27 localhost podman[325501]: 2025-12-02 10:16:27.218938915 +0000 UTC m=+0.087073836 container health_status 
c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:16:27 localhost podman[325501]: 2025-12-02 10:16:27.255524339 +0000 UTC m=+0.123659300 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:16:27 localhost podman[325474]: 2025-12-02 10:16:27.270985645 +0000 UTC m=+0.261756054 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:16:27 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 05:16:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 79 KiB/s wr, 42 op/s Dec 2 05:16:27 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:16:27 localhost podman[325500]: 2025-12-02 10:16:27.260763161 +0000 UTC m=+0.131670947 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:16:27 localhost podman[325500]: 2025-12-02 10:16:27.34602774 +0000 UTC m=+0.216935536 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:16:27 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:16:27 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8710895b-fa91-4f50-bc6f-341cddce5e76", "format": "json"}]: dispatch Dec 2 05:16:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8710895b-fa91-4f50-bc6f-341cddce5e76, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8710895b-fa91-4f50-bc6f-341cddce5e76, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:27 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:27.543+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8710895b-fa91-4f50-bc6f-341cddce5e76' of type subvolume Dec 2 05:16:27 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8710895b-fa91-4f50-bc6f-341cddce5e76' of type subvolume Dec 2 05:16:27 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8710895b-fa91-4f50-bc6f-341cddce5e76", "force": true, "format": "json"}]: dispatch Dec 2 05:16:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8710895b-fa91-4f50-bc6f-341cddce5e76, vol_name:cephfs) 
< "" Dec 2 05:16:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8710895b-fa91-4f50-bc6f-341cddce5e76'' moved to trashcan Dec 2 05:16:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:16:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8710895b-fa91-4f50-bc6f-341cddce5e76, vol_name:cephfs) < "" Dec 2 05:16:28 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d07c53ab-0584-4998-92b9-1d7bd9006b39", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:16:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, vol_name:cephfs) < "" Dec 2 05:16:28 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d07c53ab-0584-4998-92b9-1d7bd9006b39/.meta.tmp' Dec 2 05:16:28 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d07c53ab-0584-4998-92b9-1d7bd9006b39/.meta.tmp' to config b'/volumes/_nogroup/d07c53ab-0584-4998-92b9-1d7bd9006b39/.meta' Dec 2 05:16:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, vol_name:cephfs) < "" Dec 2 05:16:28 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d07c53ab-0584-4998-92b9-1d7bd9006b39", "format": "json"}]: dispatch Dec 2 05:16:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, vol_name:cephfs) < "" Dec 2 05:16:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, vol_name:cephfs) < "" Dec 2 05:16:29 localhost nova_compute[281045]: 2025-12-02 10:16:29.265 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v679: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 88 KiB/s wr, 6 op/s Dec 2 05:16:29 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "format": "json"}]: dispatch Dec 2 05:16:29 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:29 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:29 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:29.420+0000 
7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3d3f9f10-b22d-485e-b12b-97dbef75415d' of type subvolume Dec 2 05:16:29 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3d3f9f10-b22d-485e-b12b-97dbef75415d' of type subvolume Dec 2 05:16:29 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "force": true, "format": "json"}]: dispatch Dec 2 05:16:29 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < "" Dec 2 05:16:29 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d'' moved to trashcan Dec 2 05:16:29 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:16:29 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < "" Dec 2 05:16:31 localhost nova_compute[281045]: 2025-12-02 10:16:31.161 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 88 KiB/s wr, 6 op/s Dec 2 05:16:32 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", 
"sub_name": "997ff79f-92e7-4de5-90ed-58387671be8e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:16:32 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:997ff79f-92e7-4de5-90ed-58387671be8e, vol_name:cephfs) < "" Dec 2 05:16:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:32 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/997ff79f-92e7-4de5-90ed-58387671be8e/.meta.tmp' Dec 2 05:16:32 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/997ff79f-92e7-4de5-90ed-58387671be8e/.meta.tmp' to config b'/volumes/_nogroup/997ff79f-92e7-4de5-90ed-58387671be8e/.meta' Dec 2 05:16:32 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:997ff79f-92e7-4de5-90ed-58387671be8e, vol_name:cephfs) < "" Dec 2 05:16:32 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "997ff79f-92e7-4de5-90ed-58387671be8e", "format": "json"}]: dispatch Dec 2 05:16:32 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:997ff79f-92e7-4de5-90ed-58387671be8e, vol_name:cephfs) < "" Dec 2 05:16:32 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:997ff79f-92e7-4de5-90ed-58387671be8e, vol_name:cephfs) < "" Dec 2 05:16:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e277 e277: 6 total, 6 up, 6 in Dec 2 05:16:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:16:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:16:33 localhost systemd[1]: tmp-crun.P2QU2D.mount: Deactivated successfully. Dec 2 05:16:33 localhost podman[325558]: 2025-12-02 10:16:33.067918765 +0000 UTC m=+0.070715114 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:16:33 localhost podman[325558]: 2025-12-02 10:16:33.079914293 +0000 UTC m=+0.082710692 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:16:33 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:16:33 localhost podman[325559]: 2025-12-02 10:16:33.115825566 +0000 UTC m=+0.118745109 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1755695350, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git) Dec 2 05:16:33 localhost podman[325559]: 2025-12-02 10:16:33.129715303 +0000 UTC m=+0.132634776 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 05:16:33 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:16:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 103 KiB/s wr, 7 op/s Dec 2 05:16:33 localhost podman[239757]: time="2025-12-02T10:16:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:16:33 localhost podman[239757]: @ - - [02/Dec/2025:10:16:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:16:33 localhost podman[239757]: @ - - [02/Dec/2025:10:16:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19261 "" "Go-http-client/1.1" Dec 2 05:16:34 localhost nova_compute[281045]: 2025-12-02 10:16:34.268 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 103 KiB/s wr, 7 op/s Dec 2 05:16:35 localhost sshd[325617]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:16:36 localhost nova_compute[281045]: 2025-12-02 10:16:36.193 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:36 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "997ff79f-92e7-4de5-90ed-58387671be8e", "format": "json"}]: dispatch Dec 2 05:16:36 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:997ff79f-92e7-4de5-90ed-58387671be8e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:36 localhost ceph-mon[301710]: 
mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:16:36 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:16:36 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:997ff79f-92e7-4de5-90ed-58387671be8e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:36 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:36.641+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '997ff79f-92e7-4de5-90ed-58387671be8e' of type subvolume Dec 2 05:16:36 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:16:36 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:16:36 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '997ff79f-92e7-4de5-90ed-58387671be8e' of type subvolume Dec 2 05:16:36 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "997ff79f-92e7-4de5-90ed-58387671be8e", "force": true, "format": "json"}]: dispatch Dec 2 05:16:36 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:16:36 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:997ff79f-92e7-4de5-90ed-58387671be8e, vol_name:cephfs) < "" Dec 2 05:16:36 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev e437591f-a38e-4bd7-8823-784481c3d081 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:16:36 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev e437591f-a38e-4bd7-8823-784481c3d081 (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:16:36 localhost ceph-mgr[287188]: [progress INFO root] Completed event e437591f-a38e-4bd7-8823-784481c3d081 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:16:36 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:16:36 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:16:36 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/997ff79f-92e7-4de5-90ed-58387671be8e'' moved to trashcan Dec 2 05:16:36 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:16:36 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:997ff79f-92e7-4de5-90ed-58387671be8e, vol_name:cephfs) < "" Dec 2 05:16:36 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:16:36 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:16:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:16:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:16:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:16:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:16:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:16:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:16:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 103 KiB/s wr, 7 op/s Dec 2 05:16:37 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:16:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:16:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:37 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:16:38 localhost ovn_controller[153778]: 2025-12-02T10:16:38Z|00240|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory Dec 2 05:16:38 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7e7db66c-701d-40e6-a69b-fbd4e0d8a416", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:16:38 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, vol_name:cephfs) < "" Dec 2 05:16:38 localhost 
ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7e7db66c-701d-40e6-a69b-fbd4e0d8a416/.meta.tmp' Dec 2 05:16:38 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7e7db66c-701d-40e6-a69b-fbd4e0d8a416/.meta.tmp' to config b'/volumes/_nogroup/7e7db66c-701d-40e6-a69b-fbd4e0d8a416/.meta' Dec 2 05:16:38 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, vol_name:cephfs) < "" Dec 2 05:16:38 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7e7db66c-701d-40e6-a69b-fbd4e0d8a416", "format": "json"}]: dispatch Dec 2 05:16:38 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, vol_name:cephfs) < "" Dec 2 05:16:38 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, vol_name:cephfs) < "" Dec 2 05:16:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 05:16:39 localhost podman[325687]: 2025-12-02 10:16:39.081878334 +0000 UTC m=+0.083030303 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:16:39 localhost podman[325687]: 2025-12-02 10:16:39.097182294 +0000 UTC m=+0.098334313 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Dec 2 05:16:39 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:16:39 localhost nova_compute[281045]: 2025-12-02 10:16:39.269 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:39 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 86 KiB/s wr, 5 op/s Dec 2 05:16:39 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d07c53ab-0584-4998-92b9-1d7bd9006b39", "format": "json"}]: dispatch Dec 2 05:16:39 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:39 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:39 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd07c53ab-0584-4998-92b9-1d7bd9006b39' of type subvolume Dec 2 05:16:39 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:39.810+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd07c53ab-0584-4998-92b9-1d7bd9006b39' of type subvolume Dec 2 05:16:39 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d07c53ab-0584-4998-92b9-1d7bd9006b39", "force": true, "format": "json"}]: dispatch Dec 2 05:16:39 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, 
format:json, prefix:fs subvolume rm, sub_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, vol_name:cephfs) < "" Dec 2 05:16:39 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d07c53ab-0584-4998-92b9-1d7bd9006b39'' moved to trashcan Dec 2 05:16:39 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:16:39 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, vol_name:cephfs) < "" Dec 2 05:16:40 localhost ovn_metadata_agent[159477]: 2025-12-02 10:16:40.682 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:16:40 localhost ovn_metadata_agent[159477]: 2025-12-02 10:16:40.683 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:16:40 localhost nova_compute[281045]: 2025-12-02 10:16:40.683 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:41 localhost nova_compute[281045]: 2025-12-02 10:16:41.246 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 86 KiB/s wr, 5 op/s Dec 2 05:16:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e278 e278: 6 total, 6 up, 6 in Dec 2 05:16:42 localhost openstack_network_exporter[241816]: ERROR 10:16:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:16:42 localhost openstack_network_exporter[241816]: ERROR 10:16:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:16:42 localhost openstack_network_exporter[241816]: ERROR 10:16:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:16:42 localhost openstack_network_exporter[241816]: ERROR 10:16:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:16:42 localhost openstack_network_exporter[241816]: Dec 2 05:16:42 localhost openstack_network_exporter[241816]: ERROR 10:16:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:16:42 localhost openstack_network_exporter[241816]: Dec 2 05:16:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:43 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d7d186aa-42ff-406e-975a-236ed40d3d49", "format": "json"}]: dispatch Dec 2 05:16:43 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d7d186aa-42ff-406e-975a-236ed40d3d49, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:43 
localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d7d186aa-42ff-406e-975a-236ed40d3d49, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:43 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:43.018+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd7d186aa-42ff-406e-975a-236ed40d3d49' of type subvolume Dec 2 05:16:43 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd7d186aa-42ff-406e-975a-236ed40d3d49' of type subvolume Dec 2 05:16:43 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d7d186aa-42ff-406e-975a-236ed40d3d49", "force": true, "format": "json"}]: dispatch Dec 2 05:16:43 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d7d186aa-42ff-406e-975a-236ed40d3d49, vol_name:cephfs) < "" Dec 2 05:16:43 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d7d186aa-42ff-406e-975a-236ed40d3d49'' moved to trashcan Dec 2 05:16:43 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:16:43 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d7d186aa-42ff-406e-975a-236ed40d3d49, vol_name:cephfs) < "" Dec 2 05:16:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s Dec 2 05:16:44 localhost nova_compute[281045]: 2025-12-02 
10:16:44.272 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:44 localhost ovn_metadata_agent[159477]: 2025-12-02 10:16:44.685 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:16:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s Dec 2 05:16:45 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7e7db66c-701d-40e6-a69b-fbd4e0d8a416", "format": "json"}]: dispatch Dec 2 05:16:45 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:45 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:45 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7e7db66c-701d-40e6-a69b-fbd4e0d8a416' of type subvolume Dec 2 05:16:45 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:45.605+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7e7db66c-701d-40e6-a69b-fbd4e0d8a416' of type subvolume Dec 2 
05:16:45 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7e7db66c-701d-40e6-a69b-fbd4e0d8a416", "force": true, "format": "json"}]: dispatch Dec 2 05:16:45 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, vol_name:cephfs) < "" Dec 2 05:16:45 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7e7db66c-701d-40e6-a69b-fbd4e0d8a416'' moved to trashcan Dec 2 05:16:45 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:16:45 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, vol_name:cephfs) < "" Dec 2 05:16:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "75b01de1-fd46-4d42-88bc-75b04e569dcb", "format": "json"}]: dispatch Dec 2 05:16:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:46 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:46.274+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '75b01de1-fd46-4d42-88bc-75b04e569dcb' of 
type subvolume Dec 2 05:16:46 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '75b01de1-fd46-4d42-88bc-75b04e569dcb' of type subvolume Dec 2 05:16:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "75b01de1-fd46-4d42-88bc-75b04e569dcb", "force": true, "format": "json"}]: dispatch Dec 2 05:16:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, vol_name:cephfs) < "" Dec 2 05:16:46 localhost nova_compute[281045]: 2025-12-02 10:16:46.284 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/75b01de1-fd46-4d42-88bc-75b04e569dcb'' moved to trashcan Dec 2 05:16:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:16:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, vol_name:cephfs) < "" Dec 2 05:16:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s Dec 2 05:16:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:48 localhost snmpd[69217]: empty variable list in _query Dec 2 05:16:48 localhost snmpd[69217]: empty variable list in _query Dec 2 05:16:49 localhost 
nova_compute[281045]: 2025-12-02 10:16:49.275 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:49 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 87 KiB/s wr, 5 op/s Dec 2 05:16:50 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "format": "json"}]: dispatch Dec 2 05:16:50 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 87 KiB/s wr, 5 op/s Dec 2 05:16:51 localhost nova_compute[281045]: 2025-12-02 10:16:51.323 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0. 
Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.864399) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49 Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670611864506, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2824, "num_deletes": 263, "total_data_size": 4017438, "memory_usage": 4083848, "flush_reason": "Manual Compaction"} Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670611881012, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 2166306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31193, "largest_seqno": 34012, "table_properties": {"data_size": 2156116, "index_size": 6055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27564, "raw_average_key_size": 22, "raw_value_size": 2133473, "raw_average_value_size": 1769, "num_data_blocks": 260, "num_entries": 1206, "num_filter_entries": 1206, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670490, "oldest_key_time": 1764670490, "file_creation_time": 1764670611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}} Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 16681 microseconds, and 7111 cpu microseconds. Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.881087) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 2166306 bytes OK Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.881113) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.883606) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.883629) EVENT_LOG_v1 {"time_micros": 1764670611883622, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.883650) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 4003923, prev total WAL file size 
4004672, number of live WAL files 2. Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.884909) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034323537' seq:72057594037927935, type:22 .. '6D6772737461740034353038' seq:0, type:0; will stop at (end) Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(2115KB)], [48(18MB)] Dec 2 05:16:51 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670611884995, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 21070379, "oldest_snapshot_seqno": -1} Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 14715 keys, 19405700 bytes, temperature: kUnknown Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612016290, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 19405700, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19319790, "index_size": 48049, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36805, "raw_key_size": 391860, "raw_average_key_size": 26, "raw_value_size": 19068068, 
"raw_average_value_size": 1295, "num_data_blocks": 1812, "num_entries": 14715, "num_filter_entries": 14715, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}} Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.016679) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 19405700 bytes Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.018631) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.4 rd, 147.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 18.0 +0.0 blob) out(18.5 +0.0 blob), read-write-amplify(18.7) write-amplify(9.0) OK, records in: 15205, records dropped: 490 output_compression: NoCompression Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.018662) EVENT_LOG_v1 {"time_micros": 1764670612018649, "job": 28, "event": "compaction_finished", "compaction_time_micros": 131383, "compaction_time_cpu_micros": 56256, "output_level": 6, "num_output_files": 1, "total_output_size": 19405700, "num_input_records": 15205, "num_output_records": 14715, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612019099, "job": 28, "event": "table_file_deletion", "file_number": 50} Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612021901, "job": 
28, "event": "table_file_deletion", "file_number": 48} Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.884701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.021995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.022002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.022004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.022006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.022009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0. 
Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.022487) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52 Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612022576, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 270, "num_deletes": 251, "total_data_size": 31616, "memory_usage": 36808, "flush_reason": "Manual Compaction"} Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612025529, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 22566, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34017, "largest_seqno": 34282, "table_properties": {"data_size": 20734, "index_size": 72, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5042, "raw_average_key_size": 19, "raw_value_size": 17187, "raw_average_value_size": 65, "num_data_blocks": 3, "num_entries": 263, "num_filter_entries": 263, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670611, "oldest_key_time": 1764670611, "file_creation_time": 1764670612, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}} Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 3090 microseconds, and 1139 cpu microseconds. Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.025583) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 22566 bytes OK Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.025619) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.027625) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.027653) EVENT_LOG_v1 {"time_micros": 1764670612027645, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.027682) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 29535, prev total WAL file size 46465, number of live WAL 
files 2. Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.029628) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133303532' seq:72057594037927935, type:22 .. '7061786F73003133333034' seq:0, type:0; will stop at (end) Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(22KB)], [51(18MB)] Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612029669, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 19428266, "oldest_snapshot_seqno": -1} Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 14467 keys, 17868336 bytes, temperature: kUnknown Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612136530, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 17868336, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17786102, "index_size": 44949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36229, "raw_key_size": 387148, "raw_average_key_size": 26, "raw_value_size": 17540729, "raw_average_value_size": 1212, 
"num_data_blocks": 1677, "num_entries": 14467, "num_filter_entries": 14467, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670612, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}} Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.136801) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 17868336 bytes Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.139585) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.7 rd, 167.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 18.5 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(1652.8) write-amplify(791.8) OK, records in: 14978, records dropped: 511 output_compression: NoCompression Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.139611) EVENT_LOG_v1 {"time_micros": 1764670612139600, "job": 30, "event": "compaction_finished", "compaction_time_micros": 106914, "compaction_time_cpu_micros": 53842, "output_level": 6, "num_output_files": 1, "total_output_size": 17868336, "num_input_records": 14978, "num_output_records": 14467, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612139776, "job": 30, "event": "table_file_deletion", "file_number": 53} Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612142260, 
"job": 30, "event": "table_file_deletion", "file_number": 51} Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.029551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.142285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.142290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.142292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.142294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.142296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:52 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "format": "json"}]: dispatch Dec 2 05:16:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, vol_name:cephfs) < "" Dec 2 05:16:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume 
getpath, sub_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, vol_name:cephfs) < "" Dec 2 05:16:52 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:16:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < "" Dec 2 05:16:52 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta.tmp' Dec 2 05:16:52 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta.tmp' to config b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta' Dec 2 05:16:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < "" Dec 2 05:16:52 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "format": "json"}]: dispatch Dec 2 05:16:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < "" Dec 2 05:16:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < "" Dec 2 05:16:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:53 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 894 B/s rd, 101 KiB/s wr, 6 op/s Dec 2 05:16:54 localhost nova_compute[281045]: 2025-12-02 10:16:54.278 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:54 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "format": "json"}]: dispatch Dec 2 05:16:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:16:54 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "force": true, "format": "json"}]: dispatch Dec 2 05:16:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, vol_name:cephfs) < "" Dec 2 05:16:54 localhost ceph-mgr[287188]: [volumes INFO 
volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd'' moved to trashcan Dec 2 05:16:54 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:16:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, vol_name:cephfs) < "" Dec 2 05:16:54 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "snap_name": "db7eb361-3904-47e6-9098-d364d08f2cbb", "format": "json"}]: dispatch Dec 2 05:16:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:db7eb361-3904-47e6-9098-d364d08f2cbb, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < "" Dec 2 05:16:54 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:db7eb361-3904-47e6-9098-d364d08f2cbb, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < "" Dec 2 05:16:55 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 67 KiB/s wr, 3 op/s Dec 2 05:16:56 localhost nova_compute[281045]: 2025-12-02 10:16:56.364 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:56 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": 
"ce71e0bd-fac0-489e-baae-8568840b81a1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:16:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < "" Dec 2 05:16:56 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp' Dec 2 05:16:56 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp' to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta' Dec 2 05:16:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < "" Dec 2 05:16:56 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "format": "json"}]: dispatch Dec 2 05:16:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < "" Dec 2 05:16:56 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < "" Dec 2 05:16:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 
511 B/s rd, 67 KiB/s wr, 3 op/s Dec 2 05:16:57 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "snap_name": "395e084c-5f31-4d0b-b40b-8a631da3af09_e203339e-4ade-455c-bcd8-fd80834b9e84", "force": true, "format": "json"}]: dispatch Dec 2 05:16:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09_e203339e-4ade-455c-bcd8-fd80834b9e84, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < "" Dec 2 05:16:57 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp' Dec 2 05:16:57 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp' to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta' Dec 2 05:16:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09_e203339e-4ade-455c-bcd8-fd80834b9e84, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < "" Dec 2 05:16:57 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "snap_name": "395e084c-5f31-4d0b-b40b-8a631da3af09", "force": true, "format": "json"}]: dispatch Dec 2 05:16:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs 
subvolume snapshot rm, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < "" Dec 2 05:16:57 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp' Dec 2 05:16:57 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp' to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta' Dec 2 05:16:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < "" Dec 2 05:16:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:16:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:16:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. 
Dec 2 05:16:58 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "snap_name": "db7eb361-3904-47e6-9098-d364d08f2cbb_f3eeec94-da59-4afe-97b3-e80e98f55085", "force": true, "format": "json"}]: dispatch Dec 2 05:16:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:16:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:db7eb361-3904-47e6-9098-d364d08f2cbb_f3eeec94-da59-4afe-97b3-e80e98f55085, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < "" Dec 2 05:16:58 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta.tmp' Dec 2 05:16:58 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta.tmp' to config b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta' Dec 2 05:16:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:db7eb361-3904-47e6-9098-d364d08f2cbb_f3eeec94-da59-4afe-97b3-e80e98f55085, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < "" Dec 2 05:16:58 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "snap_name": "db7eb361-3904-47e6-9098-d364d08f2cbb", "force": true, "format": "json"}]: dispatch Dec 2 05:16:58 localhost 
ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:db7eb361-3904-47e6-9098-d364d08f2cbb, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < "" Dec 2 05:16:58 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta.tmp' Dec 2 05:16:58 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta.tmp' to config b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta' Dec 2 05:16:58 localhost podman[325710]: 2025-12-02 10:16:58.091624543 +0000 UTC m=+0.077991690 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3) Dec 2 05:16:58 localhost podman[325710]: 2025-12-02 10:16:58.104786866 +0000 UTC m=+0.091154003 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 2 05:16:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:db7eb361-3904-47e6-9098-d364d08f2cbb, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < "" Dec 2 05:16:58 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:16:58 localhost podman[325709]: 2025-12-02 10:16:58.146901296 +0000 UTC m=+0.134704508 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 05:16:58 localhost podman[325709]: 2025-12-02 10:16:58.156363187 +0000 UTC m=+0.144166369 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:16:58 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 05:16:58 localhost podman[325708]: 2025-12-02 10:16:58.107671674 +0000 UTC m=+0.098946602 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible) Dec 2 05:16:58 localhost podman[325708]: 2025-12-02 10:16:58.237217454 +0000 UTC 
m=+0.228492452 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:16:58 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 05:16:58 localhost podman[325716]: 2025-12-02 10:16:58.160372959 +0000 UTC m=+0.138870535 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:16:58 localhost podman[325716]: 2025-12-02 10:16:58.294932992 +0000 UTC m=+0.273430578 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 2 05:16:58 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:16:59 localhost nova_compute[281045]: 2025-12-02 10:16:59.281 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 102 KiB/s wr, 6 op/s Dec 2 05:16:59 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "snap_name": "7af3a8b2-5504-4261-9144-956137288f3e", "format": "json"}]: dispatch Dec 2 05:16:59 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7af3a8b2-5504-4261-9144-956137288f3e, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < "" Dec 2 05:16:59 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7af3a8b2-5504-4261-9144-956137288f3e, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < "" Dec 2 05:16:59 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5b0a592a-aac6-453e-a44c-9563c7dadce2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:16:59 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, vol_name:cephfs) < "" Dec 2 05:16:59 localhost ceph-mgr[287188]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5b0a592a-aac6-453e-a44c-9563c7dadce2/.meta.tmp' Dec 2 05:16:59 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5b0a592a-aac6-453e-a44c-9563c7dadce2/.meta.tmp' to config b'/volumes/_nogroup/5b0a592a-aac6-453e-a44c-9563c7dadce2/.meta' Dec 2 05:16:59 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, vol_name:cephfs) < "" Dec 2 05:16:59 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5b0a592a-aac6-453e-a44c-9563c7dadce2", "format": "json"}]: dispatch Dec 2 05:16:59 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, vol_name:cephfs) < "" Dec 2 05:16:59 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, vol_name:cephfs) < "" Dec 2 05:17:00 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "format": "json"}]: dispatch Dec 2 05:17:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, 
format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:00 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:00.557+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd6f9e4a2-dde5-48d3-9ade-72d59a880bf2' of type subvolume Dec 2 05:17:00 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd6f9e4a2-dde5-48d3-9ade-72d59a880bf2' of type subvolume Dec 2 05:17:00 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "force": true, "format": "json"}]: dispatch Dec 2 05:17:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < "" Dec 2 05:17:00 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2'' moved to trashcan Dec 2 05:17:00 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:17:00 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < "" Dec 2 05:17:01 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2188bdca-035f-45be-90c0-127aae7698b7", "format": "json"}]: dispatch Dec 2 05:17:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting 
_cmd_fs_clone_status(clone_name:2188bdca-035f-45be-90c0-127aae7698b7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2188bdca-035f-45be-90c0-127aae7698b7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:01 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:01.201+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2188bdca-035f-45be-90c0-127aae7698b7' of type subvolume Dec 2 05:17:01 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2188bdca-035f-45be-90c0-127aae7698b7' of type subvolume Dec 2 05:17:01 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "force": true, "format": "json"}]: dispatch Dec 2 05:17:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < "" Dec 2 05:17:01 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7'' moved to trashcan Dec 2 05:17:01 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:17:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < "" Dec 2 05:17:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 220 
MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 59 KiB/s wr, 4 op/s Dec 2 05:17:01 localhost nova_compute[281045]: 2025-12-02 10:17:01.408 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e279 e279: 6 total, 6 up, 6 in Dec 2 05:17:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:17:03.186 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:17:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:17:03.186 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:17:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:17:03.186 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:17:03 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8db7e7dd-0638-45f8-969b-0cba743185e7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8db7e7dd-0638-45f8-969b-0cba743185e7, vol_name:cephfs) < "" Dec 2 05:17:03 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 221 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 119 KiB/s wr, 7 op/s Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8db7e7dd-0638-45f8-969b-0cba743185e7/.meta.tmp' Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8db7e7dd-0638-45f8-969b-0cba743185e7/.meta.tmp' to config b'/volumes/_nogroup/8db7e7dd-0638-45f8-969b-0cba743185e7/.meta' Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8db7e7dd-0638-45f8-969b-0cba743185e7, vol_name:cephfs) < "" Dec 2 05:17:03 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8db7e7dd-0638-45f8-969b-0cba743185e7", "format": "json"}]: dispatch Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8db7e7dd-0638-45f8-969b-0cba743185e7, vol_name:cephfs) < "" Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8db7e7dd-0638-45f8-969b-0cba743185e7, vol_name:cephfs) < "" Dec 2 05:17:03 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": 
"cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "snap_name": "7af3a8b2-5504-4261-9144-956137288f3e", "target_sub_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "format": "json"}]: dispatch Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:7af3a8b2-5504-4261-9144-956137288f3e, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, target_sub_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, vol_name:cephfs) < "" Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp' Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp' to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta' Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.clone_index] tracking-id a6ca65fd-98b6-4668-9e39-7653f2c50c81 for path b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70' Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp' Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp' to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta' Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:7af3a8b2-5504-4261-9144-956137288f3e, 
sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, target_sub_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, vol_name:cephfs) < "" Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70 Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, a33ca4d0-df57-473d-9fc9-9e83431eec70) Dec 2 05:17:03 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "format": "json"}]: dispatch Dec 2 05:17:03 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:03 localhost podman[239757]: time="2025-12-02T10:17:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:17:03 localhost podman[239757]: @ - - [02/Dec/2025:10:17:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:17:03 localhost podman[239757]: @ - - [02/Dec/2025:10:17:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19256 "" "Go-http-client/1.1" Dec 2 05:17:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:17:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:17:04 localhost systemd[1]: tmp-crun.nAhIR5.mount: Deactivated successfully. 
Dec 2 05:17:04 localhost podman[325790]: 2025-12-02 10:17:04.093172915 +0000 UTC m=+0.095522128 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:17:04 localhost podman[325790]: 2025-12-02 10:17:04.10183988 +0000 UTC m=+0.104189093 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:17:04 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:17:04 localhost systemd[1]: tmp-crun.UnlhhQ.mount: Deactivated successfully. 
Dec 2 05:17:04 localhost podman[325791]: 2025-12-02 10:17:04.185842434 +0000 UTC m=+0.187822976 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 05:17:04 localhost podman[325791]: 2025-12-02 10:17:04.201914917 +0000 UTC m=+0.203895469 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, distribution-scope=public, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, version=9.6) Dec 2 05:17:04 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:17:04 localhost nova_compute[281045]: 2025-12-02 10:17:04.285 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:05 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 221 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 119 KiB/s wr, 7 op/s Dec 2 05:17:06 localhost nova_compute[281045]: 2025-12-02 10:17:06.457 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:17:06 Dec 2 05:17:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 05:17:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap Dec 2 05:17:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['volumes', 'manila_data', 'backups', 'vms', '.mgr', 'images', 'manila_metadata'] Dec 2 05:17:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes Dec 2 05:17:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:17:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:17:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:17:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:17:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:17:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:17:07 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, a33ca4d0-df57-473d-9fc9-9e83431eec70) -- by 0 seconds Dec 2 05:17:07 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp' Dec 2 05:17:07 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp' to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta' Dec 2 05:17:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:07 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:17:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < "" Dec 2 05:17:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 221 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 119 KiB/s wr, 7 op/s Dec 2 05:17:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:17:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:17:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 
'.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 2 05:17:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:17:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Dec 2 05:17:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:17:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Dec 2 05:17:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:17:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 2 05:17:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:17:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 2 05:17:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:17:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32) Dec 2 05:17:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:17:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0020270078692071467 of space, bias 4.0, pg target 
1.6134982638888886 quantized to 16 (current 16) Dec 2 05:17:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:17:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:17:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:17:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:17:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:17:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:17:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:17:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:17:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:17:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:17:07 localhost nova_compute[281045]: 2025-12-02 10:17:07.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:09 localhost nova_compute[281045]: 2025-12-02 10:17:09.287 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:09 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 120 
KiB/s wr, 7 op/s Dec 2 05:17:09 localhost nova_compute[281045]: 2025-12-02 10:17:09.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:09 localhost nova_compute[281045]: 2025-12-02 10:17:09.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:17:10 localhost podman[325833]: 2025-12-02 10:17:10.064712786 +0000 UTC m=+0.072817542 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 2 05:17:10 localhost podman[325833]: 2025-12-02 10:17:10.100878314 +0000 UTC m=+0.108983070 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:17:10 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 05:17:10 localhost nova_compute[281045]: 2025-12-02 10:17:10.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 05:17:11 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 20K writes, 81K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s#012Cumulative WAL: 20K writes, 7249 syncs, 2.85 writes per sync, written: 0.07 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 47K keys, 12K commit groups, 1.0 writes per commit group, ingest: 44.32 MB, 0.07 MB/s#012Interval WAL: 12K writes, 5057 syncs, 2.42 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 05:17:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 120 KiB/s wr, 7 op/s Dec 2 05:17:11 localhost 
nova_compute[281045]: 2025-12-02 10:17:11.474 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:11 localhost nova_compute[281045]: 2025-12-02 10:17:11.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:11 localhost nova_compute[281045]: 2025-12-02 10:17:11.548 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:17:11 localhost nova_compute[281045]: 2025-12-02 10:17:11.548 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:17:11 localhost nova_compute[281045]: 2025-12-02 10:17:11.549 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:17:11 localhost nova_compute[281045]: 2025-12-02 10:17:11.549 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:17:11 localhost nova_compute[281045]: 2025-12-02 10:17:11.549 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:17:11 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e280 e280: 6 total, 6 up, 6 in Dec 2 05:17:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:17:12 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3832367767' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:17:12 localhost nova_compute[281045]: 2025-12-02 10:17:12.038 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:17:12 localhost openstack_network_exporter[241816]: ERROR 10:17:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:17:12 localhost openstack_network_exporter[241816]: ERROR 10:17:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:17:12 localhost openstack_network_exporter[241816]: ERROR 10:17:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:17:12 localhost openstack_network_exporter[241816]: ERROR 10:17:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:17:12 localhost openstack_network_exporter[241816]: Dec 2 05:17:12 localhost 
openstack_network_exporter[241816]: ERROR 10:17:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:17:12 localhost openstack_network_exporter[241816]: Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.snap/7af3a8b2-5504-4261-9144-956137288f3e/1bbce990-c6c2-412f-a746-3f417e5bfa8d' to b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/c7f20348-1e91-4645-90a4-2dc42aa24452' Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta.tmp' Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta.tmp' to config b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta' Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < "" Dec 2 05:17:12 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "format": "json"}]: dispatch Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < "" Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp' Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp' to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta' Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < "" Dec 2 05:17:12 localhost nova_compute[281045]: 2025-12-02 10:17:12.229 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.clone_index] untracking a6ca65fd-98b6-4668-9e39-7653f2c50c81 Dec 2 05:17:12 localhost nova_compute[281045]: 2025-12-02 10:17:12.230 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11386MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", 
"dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:17:12 localhost nova_compute[281045]: 2025-12-02 10:17:12.230 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:17:12 localhost nova_compute[281045]: 2025-12-02 10:17:12.230 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp' Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp' to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta' Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp' Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp' to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta' Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, a33ca4d0-df57-473d-9fc9-9e83431eec70) Dec 2 05:17:12 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8db7e7dd-0638-45f8-969b-0cba743185e7", "format": "json"}]: dispatch Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8db7e7dd-0638-45f8-969b-0cba743185e7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8db7e7dd-0638-45f8-969b-0cba743185e7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:12 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:12.303+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8db7e7dd-0638-45f8-969b-0cba743185e7' of type subvolume 
Dec 2 05:17:12 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8db7e7dd-0638-45f8-969b-0cba743185e7' of type subvolume Dec 2 05:17:12 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8db7e7dd-0638-45f8-969b-0cba743185e7", "force": true, "format": "json"}]: dispatch Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8db7e7dd-0638-45f8-969b-0cba743185e7, vol_name:cephfs) < "" Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8db7e7dd-0638-45f8-969b-0cba743185e7'' moved to trashcan Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:17:12 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8db7e7dd-0638-45f8-969b-0cba743185e7, vol_name:cephfs) < "" Dec 2 05:17:12 localhost nova_compute[281045]: 2025-12-02 10:17:12.457 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:17:12 localhost nova_compute[281045]: 2025-12-02 10:17:12.457 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 
05:17:12 localhost nova_compute[281045]: 2025-12-02 10:17:12.477 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:17:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:17:12 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4112839142' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:17:12 localhost nova_compute[281045]: 2025-12-02 10:17:12.942 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:17:12 localhost nova_compute[281045]: 2025-12-02 10:17:12.947 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:17:13 localhost nova_compute[281045]: 2025-12-02 10:17:13.063 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 
'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:17:13 localhost nova_compute[281045]: 2025-12-02 10:17:13.066 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:17:13 localhost nova_compute[281045]: 2025-12-02 10:17:13.066 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:17:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 107 KiB/s wr, 6 op/s Dec 2 05:17:14 localhost nova_compute[281045]: 2025-12-02 10:17:14.067 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:14 localhost nova_compute[281045]: 2025-12-02 10:17:14.067 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:17:14 localhost nova_compute[281045]: 2025-12-02 10:17:14.068 281049 DEBUG nova.compute.manager [None 
req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:17:14 localhost nova_compute[281045]: 2025-12-02 10:17:14.093 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 05:17:14 localhost nova_compute[281045]: 2025-12-02 10:17:14.093 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:14 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "snap_name": "887f67e9-2bf7-45b5-84dd-6cbee4d7656b", "format": "json"}]: dispatch Dec 2 05:17:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:887f67e9-2bf7-45b5-84dd-6cbee4d7656b, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < "" Dec 2 05:17:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:887f67e9-2bf7-45b5-84dd-6cbee4d7656b, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < "" Dec 2 05:17:14 localhost nova_compute[281045]: 2025-12-02 10:17:14.292 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:14 localhost ceph-mgr[287188]: log_channel(audit) 
log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee9ab43f-a3e3-4447-9084-6b663c27a445", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:17:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, vol_name:cephfs) < "" Dec 2 05:17:14 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee9ab43f-a3e3-4447-9084-6b663c27a445/.meta.tmp' Dec 2 05:17:14 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee9ab43f-a3e3-4447-9084-6b663c27a445/.meta.tmp' to config b'/volumes/_nogroup/ee9ab43f-a3e3-4447-9084-6b663c27a445/.meta' Dec 2 05:17:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, vol_name:cephfs) < "" Dec 2 05:17:14 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee9ab43f-a3e3-4447-9084-6b663c27a445", "format": "json"}]: dispatch Dec 2 05:17:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, vol_name:cephfs) < "" Dec 2 05:17:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, vol_name:cephfs) < "" Dec 2 05:17:14 localhost 
nova_compute[281045]: 2025-12-02 10:17:14.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:14 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:17:14 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < "" Dec 2 05:17:15 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta.tmp' Dec 2 05:17:15 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta.tmp' to config b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta' Dec 2 05:17:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < "" Dec 2 05:17:15 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "format": "json"}]: dispatch Dec 2 05:17:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < "" Dec 2 05:17:15 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < "" Dec 2 05:17:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v706: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 107 KiB/s wr, 6 op/s Dec 2 05:17:15 localhost nova_compute[281045]: 2025-12-02 10:17:15.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:15 localhost nova_compute[281045]: 2025-12-02 10:17:15.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:17:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 05:17:15 localhost ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.2 total, 600.0 interval#012Cumulative writes: 24K writes, 90K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s#012Cumulative WAL: 24K writes, 8465 syncs, 2.85 writes per sync, written: 0.06 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 14K writes, 50K keys, 14K commit groups, 1.0 writes per commit group, ingest: 27.33 MB, 0.05 MB/s#012Interval WAL: 14K writes, 6036 syncs, 2.36 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 05:17:16 localhost nova_compute[281045]: 2025-12-02 10:17:16.519 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 107 KiB/s wr, 6 op/s Dec 2 05:17:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:17 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "snap_name": "887f67e9-2bf7-45b5-84dd-6cbee4d7656b_adf6c68d-e84b-4410-ac7f-adf3a353b05d", "force": true, "format": "json"}]: dispatch Dec 2 05:17:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume 
snapshot rm, snap_name:887f67e9-2bf7-45b5-84dd-6cbee4d7656b_adf6c68d-e84b-4410-ac7f-adf3a353b05d, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < "" Dec 2 05:17:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta.tmp' Dec 2 05:17:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta.tmp' to config b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta' Dec 2 05:17:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:887f67e9-2bf7-45b5-84dd-6cbee4d7656b_adf6c68d-e84b-4410-ac7f-adf3a353b05d, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < "" Dec 2 05:17:17 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "snap_name": "887f67e9-2bf7-45b5-84dd-6cbee4d7656b", "force": true, "format": "json"}]: dispatch Dec 2 05:17:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:887f67e9-2bf7-45b5-84dd-6cbee4d7656b, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < "" Dec 2 05:17:18 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta.tmp' Dec 2 05:17:18 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta.tmp' to config 
b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta' Dec 2 05:17:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:887f67e9-2bf7-45b5-84dd-6cbee4d7656b, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < "" Dec 2 05:17:18 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee9ab43f-a3e3-4447-9084-6b663c27a445", "format": "json"}]: dispatch Dec 2 05:17:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:18 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:18.263+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee9ab43f-a3e3-4447-9084-6b663c27a445' of type subvolume Dec 2 05:17:18 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee9ab43f-a3e3-4447-9084-6b663c27a445' of type subvolume Dec 2 05:17:18 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee9ab43f-a3e3-4447-9084-6b663c27a445", "force": true, "format": "json"}]: dispatch Dec 2 05:17:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, vol_name:cephfs) < "" Dec 2 05:17:18 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ee9ab43f-a3e3-4447-9084-6b663c27a445'' moved to trashcan Dec 2 05:17:18 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:17:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, vol_name:cephfs) < "" Dec 2 05:17:18 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "snap_name": "04e237e6-bdd1-4932-bf28-2abac8ca1d29", "format": "json"}]: dispatch Dec 2 05:17:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:04e237e6-bdd1-4932-bf28-2abac8ca1d29, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < "" Dec 2 05:17:18 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:04e237e6-bdd1-4932-bf28-2abac8ca1d29, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < "" Dec 2 05:17:19 localhost nova_compute[281045]: 2025-12-02 10:17:19.292 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 103 KiB/s wr, 6 op/s Dec 2 05:17:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' 
entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "format": "json"}]: dispatch Dec 2 05:17:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:21 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:21.127+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '421f49de-9caa-4e96-8ed7-c70fac5c9582' of type subvolume Dec 2 05:17:21 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '421f49de-9caa-4e96-8ed7-c70fac5c9582' of type subvolume Dec 2 05:17:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "force": true, "format": "json"}]: dispatch Dec 2 05:17:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < "" Dec 2 05:17:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582'' moved to trashcan Dec 2 05:17:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:17:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < "" Dec 2 05:17:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 103 KiB/s wr, 6 op/s Dec 2 05:17:21 localhost nova_compute[281045]: 2025-12-02 10:17:21.556 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b8feab34-30e1-4504-93e1-fee137b334fd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:17:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b8feab34-30e1-4504-93e1-fee137b334fd, vol_name:cephfs) < "" Dec 2 05:17:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8feab34-30e1-4504-93e1-fee137b334fd/.meta.tmp' Dec 2 05:17:21 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8feab34-30e1-4504-93e1-fee137b334fd/.meta.tmp' to config b'/volumes/_nogroup/b8feab34-30e1-4504-93e1-fee137b334fd/.meta' Dec 2 05:17:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b8feab34-30e1-4504-93e1-fee137b334fd, vol_name:cephfs) < "" Dec 2 05:17:21 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b8feab34-30e1-4504-93e1-fee137b334fd", "format": "json"}]: dispatch Dec 2 05:17:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b8feab34-30e1-4504-93e1-fee137b334fd, vol_name:cephfs) < "" Dec 2 05:17:21 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b8feab34-30e1-4504-93e1-fee137b334fd, vol_name:cephfs) < "" Dec 2 05:17:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:22 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "snap_name": "04e237e6-bdd1-4932-bf28-2abac8ca1d29_15204c8c-795d-4343-829f-29f32d779260", "force": true, "format": "json"}]: dispatch Dec 2 05:17:22 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04e237e6-bdd1-4932-bf28-2abac8ca1d29_15204c8c-795d-4343-829f-29f32d779260, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < "" Dec 2 05:17:22 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta.tmp' Dec 2 05:17:22 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta.tmp' to config b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta' Dec 2 05:17:22 localhost ceph-mgr[287188]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04e237e6-bdd1-4932-bf28-2abac8ca1d29_15204c8c-795d-4343-829f-29f32d779260, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < "" Dec 2 05:17:22 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "snap_name": "04e237e6-bdd1-4932-bf28-2abac8ca1d29", "force": true, "format": "json"}]: dispatch Dec 2 05:17:22 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04e237e6-bdd1-4932-bf28-2abac8ca1d29, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < "" Dec 2 05:17:22 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta.tmp' Dec 2 05:17:22 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta.tmp' to config b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta' Dec 2 05:17:22 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04e237e6-bdd1-4932-bf28-2abac8ca1d29, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < "" Dec 2 05:17:23 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e281 e281: 6 total, 6 up, 6 in Dec 2 05:17:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v711: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 123 KiB/s wr, 7 op/s Dec 2 05:17:24 localhost nova_compute[281045]: 2025-12-02 10:17:24.294 281049 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:25 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b8feab34-30e1-4504-93e1-fee137b334fd", "format": "json"}]: dispatch Dec 2 05:17:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b8feab34-30e1-4504-93e1-fee137b334fd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b8feab34-30e1-4504-93e1-fee137b334fd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:25 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:25.242+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b8feab34-30e1-4504-93e1-fee137b334fd' of type subvolume Dec 2 05:17:25 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b8feab34-30e1-4504-93e1-fee137b334fd' of type subvolume Dec 2 05:17:25 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b8feab34-30e1-4504-93e1-fee137b334fd", "force": true, "format": "json"}]: dispatch Dec 2 05:17:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b8feab34-30e1-4504-93e1-fee137b334fd, vol_name:cephfs) < "" Dec 2 05:17:25 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 
'b'/volumes/_nogroup/b8feab34-30e1-4504-93e1-fee137b334fd'' moved to trashcan Dec 2 05:17:25 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:17:25 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b8feab34-30e1-4504-93e1-fee137b334fd, vol_name:cephfs) < "" Dec 2 05:17:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 123 KiB/s wr, 7 op/s Dec 2 05:17:26 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "format": "json"}]: dispatch Dec 2 05:17:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:26 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:26.143+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bf788ca1-8e50-4d58-9737-f8c6482ff48b' of type subvolume Dec 2 05:17:26 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bf788ca1-8e50-4d58-9737-f8c6482ff48b' of type subvolume Dec 2 05:17:26 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": 
"bf788ca1-8e50-4d58-9737-f8c6482ff48b", "force": true, "format": "json"}]: dispatch Dec 2 05:17:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < "" Dec 2 05:17:26 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b'' moved to trashcan Dec 2 05:17:26 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:17:26 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < "" Dec 2 05:17:26 localhost nova_compute[281045]: 2025-12-02 10:17:26.608 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 123 KiB/s wr, 7 op/s Dec 2 05:17:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e282 e282: 6 total, 6 up, 6 in Dec 2 05:17:28 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "37ec89f0-b485-493a-a6e2-4d54629ab0d1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:17:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, 
namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, vol_name:cephfs) < "" Dec 2 05:17:28 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/37ec89f0-b485-493a-a6e2-4d54629ab0d1/.meta.tmp' Dec 2 05:17:28 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/37ec89f0-b485-493a-a6e2-4d54629ab0d1/.meta.tmp' to config b'/volumes/_nogroup/37ec89f0-b485-493a-a6e2-4d54629ab0d1/.meta' Dec 2 05:17:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, vol_name:cephfs) < "" Dec 2 05:17:28 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "37ec89f0-b485-493a-a6e2-4d54629ab0d1", "format": "json"}]: dispatch Dec 2 05:17:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, vol_name:cephfs) < "" Dec 2 05:17:28 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, vol_name:cephfs) < "" Dec 2 05:17:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:17:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:17:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. 
Dec 2 05:17:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:17:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v715: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 150 KiB/s wr, 9 op/s Dec 2 05:17:29 localhost nova_compute[281045]: 2025-12-02 10:17:29.395 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:29 localhost podman[325896]: 2025-12-02 10:17:29.448864835 +0000 UTC m=+0.445367746 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:17:29 localhost podman[325896]: 2025-12-02 10:17:29.454491838 +0000 UTC m=+0.450994809 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, 
container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:17:29 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:17:29 localhost podman[325895]: 2025-12-02 10:17:29.417538076 +0000 UTC m=+0.417507433 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 2 05:17:29 localhost podman[325897]: 2025-12-02 10:17:29.431412331 +0000 UTC m=+0.429253433 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:17:29 localhost podman[325902]: 2025-12-02 10:17:29.489950824 +0000 UTC m=+0.479723370 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes 
Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:17:29 localhost podman[325897]: 2025-12-02 10:17:29.511544155 +0000 UTC m=+0.509385247 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute) Dec 2 05:17:29 localhost systemd[1]: 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:17:29 localhost podman[325902]: 2025-12-02 10:17:29.524576415 +0000 UTC m=+0.514348961 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=ovn_controller) Dec 2 05:17:29 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:17:29 localhost podman[325895]: 2025-12-02 10:17:29.55021654 +0000 UTC m=+0.550185897 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:17:29 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: 
Deactivated successfully. Dec 2 05:17:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v716: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 45 KiB/s wr, 4 op/s Dec 2 05:17:31 localhost nova_compute[281045]: 2025-12-02 10:17:31.647 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:31 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e283 e283: 6 total, 6 up, 6 in Dec 2 05:17:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:33 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "37ec89f0-b485-493a-a6e2-4d54629ab0d1", "format": "json"}]: dispatch Dec 2 05:17:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:33 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '37ec89f0-b485-493a-a6e2-4d54629ab0d1' of type subvolume Dec 2 05:17:33 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:33.096+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '37ec89f0-b485-493a-a6e2-4d54629ab0d1' of type subvolume Dec 2 05:17:33 localhost ceph-mgr[287188]: 
log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "37ec89f0-b485-493a-a6e2-4d54629ab0d1", "force": true, "format": "json"}]: dispatch Dec 2 05:17:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, vol_name:cephfs) < "" Dec 2 05:17:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/37ec89f0-b485-493a-a6e2-4d54629ab0d1'' moved to trashcan Dec 2 05:17:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:17:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, vol_name:cephfs) < "" Dec 2 05:17:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v718: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 96 KiB/s wr, 6 op/s Dec 2 05:17:33 localhost podman[239757]: time="2025-12-02T10:17:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:17:33 localhost podman[239757]: @ - - [02/Dec/2025:10:17:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:17:33 localhost podman[239757]: @ - - [02/Dec/2025:10:17:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19265 "" "Go-http-client/1.1" Dec 2 05:17:34 localhost nova_compute[281045]: 2025-12-02 10:17:34.398 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:34 localhost systemd[1]: 
Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:17:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:17:35 localhost podman[325976]: 2025-12-02 10:17:35.089892601 +0000 UTC m=+0.089025869 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6) Dec 2 05:17:35 localhost podman[325976]: 2025-12-02 10:17:35.104533069 +0000 UTC m=+0.103666377 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, architecture=x86_64) Dec 2 05:17:35 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:17:35 localhost podman[325975]: 2025-12-02 10:17:35.203163141 +0000 UTC m=+0.204564839 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:17:35 localhost podman[325975]: 2025-12-02 10:17:35.217027966 +0000 UTC m=+0.218429644 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:17:35 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. Dec 2 05:17:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 96 KiB/s wr, 6 op/s Dec 2 05:17:36 localhost nova_compute[281045]: 2025-12-02 10:17:36.676 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:36 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 e284: 6 total, 6 up, 6 in Dec 2 05:17:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:17:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:17:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:17:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:17:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:17:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:17:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 51 KiB/s wr, 2 op/s Dec 2 05:17:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:37 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2826f8b0-e859-462c-8596-fb04c439e342", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:17:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2826f8b0-e859-462c-8596-fb04c439e342, vol_name:cephfs) < "" Dec 2 05:17:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:17:37 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:17:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2826f8b0-e859-462c-8596-fb04c439e342/.meta.tmp' Dec 2 05:17:37 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/2826f8b0-e859-462c-8596-fb04c439e342/.meta.tmp' to config b'/volumes/_nogroup/2826f8b0-e859-462c-8596-fb04c439e342/.meta' Dec 2 05:17:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:17:37 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:17:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2826f8b0-e859-462c-8596-fb04c439e342, vol_name:cephfs) < "" Dec 2 05:17:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:17:37 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2826f8b0-e859-462c-8596-fb04c439e342", "format": "json"}]: dispatch Dec 2 05:17:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2826f8b0-e859-462c-8596-fb04c439e342, vol_name:cephfs) < "" Dec 2 05:17:37 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2826f8b0-e859-462c-8596-fb04c439e342, vol_name:cephfs) < "" Dec 2 05:17:37 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev f7f4e511-9bf3-4844-b8a5-5cb908bc3d6d (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:17:37 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev f7f4e511-9bf3-4844-b8a5-5cb908bc3d6d (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:17:37 
localhost ceph-mgr[287188]: [progress INFO root] Completed event f7f4e511-9bf3-4844-b8a5-5cb908bc3d6d (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:17:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:17:37 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 05:17:37 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:17:37 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:17:39 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v722: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 91 KiB/s wr, 5 op/s Dec 2 05:17:39 localhost nova_compute[281045]: 2025-12-02 10:17:39.400 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 05:17:41 localhost podman[326103]: 2025-12-02 10:17:41.10179987 +0000 UTC m=+0.105262237 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd) Dec 2 05:17:41 localhost podman[326103]: 2025-12-02 10:17:41.116001935 +0000 UTC m=+0.119464302 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:17:41 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:17:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v723: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 434 B/s rd, 78 KiB/s wr, 4 op/s Dec 2 05:17:41 localhost nova_compute[281045]: 2025-12-02 10:17:41.726 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:42 localhost openstack_network_exporter[241816]: ERROR 10:17:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:17:42 localhost openstack_network_exporter[241816]: ERROR 10:17:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:17:42 localhost openstack_network_exporter[241816]: ERROR 10:17:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:17:42 localhost openstack_network_exporter[241816]: ERROR 10:17:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:17:42 localhost openstack_network_exporter[241816]: Dec 2 05:17:42 localhost openstack_network_exporter[241816]: ERROR 10:17:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:17:42 localhost openstack_network_exporter[241816]: Dec 2 05:17:42 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:17:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:17:42 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2826f8b0-e859-462c-8596-fb04c439e342", "format": "json"}]: dispatch Dec 2 05:17:42 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting 
_cmd_fs_clone_status(clone_name:2826f8b0-e859-462c-8596-fb04c439e342, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:42 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2826f8b0-e859-462c-8596-fb04c439e342, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:42 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:42.450+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2826f8b0-e859-462c-8596-fb04c439e342' of type subvolume Dec 2 05:17:42 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2826f8b0-e859-462c-8596-fb04c439e342' of type subvolume Dec 2 05:17:42 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2826f8b0-e859-462c-8596-fb04c439e342", "force": true, "format": "json"}]: dispatch Dec 2 05:17:42 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2826f8b0-e859-462c-8596-fb04c439e342, vol_name:cephfs) < "" Dec 2 05:17:42 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2826f8b0-e859-462c-8596-fb04c439e342'' moved to trashcan Dec 2 05:17:42 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:17:42 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2826f8b0-e859-462c-8596-fb04c439e342, vol_name:cephfs) < "" Dec 2 05:17:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 
inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:43 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:17:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 70 KiB/s wr, 3 op/s Dec 2 05:17:44 localhost nova_compute[281045]: 2025-12-02 10:17:44.403 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v725: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 70 KiB/s wr, 3 op/s Dec 2 05:17:46 localhost nova_compute[281045]: 2025-12-02 10:17:46.773 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5b0a592a-aac6-453e-a44c-9563c7dadce2", "format": "json"}]: dispatch Dec 2 05:17:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:46 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:46.956+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b0a592a-aac6-453e-a44c-9563c7dadce2' of type subvolume Dec 2 05:17:46 localhost 
ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b0a592a-aac6-453e-a44c-9563c7dadce2' of type subvolume Dec 2 05:17:46 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5b0a592a-aac6-453e-a44c-9563c7dadce2", "force": true, "format": "json"}]: dispatch Dec 2 05:17:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, vol_name:cephfs) < "" Dec 2 05:17:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5b0a592a-aac6-453e-a44c-9563c7dadce2'' moved to trashcan Dec 2 05:17:46 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:17:46 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, vol_name:cephfs) < "" Dec 2 05:17:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v726: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 196 B/s rd, 67 KiB/s wr, 2 op/s Dec 2 05:17:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:49 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v727: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 78 KiB/s wr, 3 op/s Dec 2 05:17:49 localhost nova_compute[281045]: 2025-12-02 10:17:49.406 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 
2 05:17:49 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "format": "json"}]: dispatch Dec 2 05:17:49 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v728: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 51 KiB/s wr, 2 op/s Dec 2 05:17:51 localhost ovn_metadata_agent[159477]: 2025-12-02 10:17:51.714 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:17:51 localhost ovn_metadata_agent[159477]: 2025-12-02 10:17:51.715 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:17:51 localhost nova_compute[281045]: 2025-12-02 10:17:51.742 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:51 localhost nova_compute[281045]: 2025-12-02 10:17:51.775 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:17:52 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "format": "json"}]: dispatch Dec 2 05:17:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, vol_name:cephfs) < "" Dec 2 05:17:52 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, vol_name:cephfs) < "" Dec 2 05:17:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:53 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v729: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 74 KiB/s wr, 3 op/s Dec 2 05:17:54 localhost nova_compute[281045]: 2025-12-02 10:17:54.410 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:55 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v730: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 43 KiB/s wr, 2 op/s Dec 2 05:17:55 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": 
"e32efa73-156a-46e5-a7b8-279ab8d48b0b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:17:55 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < "" Dec 2 05:17:55 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e32efa73-156a-46e5-a7b8-279ab8d48b0b/.meta.tmp' Dec 2 05:17:55 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e32efa73-156a-46e5-a7b8-279ab8d48b0b/.meta.tmp' to config b'/volumes/_nogroup/e32efa73-156a-46e5-a7b8-279ab8d48b0b/.meta' Dec 2 05:17:55 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < "" Dec 2 05:17:55 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e32efa73-156a-46e5-a7b8-279ab8d48b0b", "format": "json"}]: dispatch Dec 2 05:17:55 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < "" Dec 2 05:17:55 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < "" Dec 2 05:17:56 localhost nova_compute[281045]: 2025-12-02 10:17:56.813 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v731: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 43 KiB/s wr, 2 op/s Dec 2 05:17:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:57 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "79a413b5-c28c-47ee-83ea-fa37bb286785", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:17:57 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:79a413b5-c28c-47ee-83ea-fa37bb286785, vol_name:cephfs) < "" Dec 2 05:17:58 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/79a413b5-c28c-47ee-83ea-fa37bb286785/.meta.tmp' Dec 2 05:17:58 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/79a413b5-c28c-47ee-83ea-fa37bb286785/.meta.tmp' to config b'/volumes/_nogroup/79a413b5-c28c-47ee-83ea-fa37bb286785/.meta' Dec 2 05:17:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:79a413b5-c28c-47ee-83ea-fa37bb286785, vol_name:cephfs) < "" Dec 2 05:17:58 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": 
"79a413b5-c28c-47ee-83ea-fa37bb286785", "format": "json"}]: dispatch Dec 2 05:17:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:79a413b5-c28c-47ee-83ea-fa37bb286785, vol_name:cephfs) < "" Dec 2 05:17:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:79a413b5-c28c-47ee-83ea-fa37bb286785, vol_name:cephfs) < "" Dec 2 05:17:58 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "e32efa73-156a-46e5-a7b8-279ab8d48b0b", "new_size": 2147483648, "format": "json"}]: dispatch Dec 2 05:17:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < "" Dec 2 05:17:58 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < "" Dec 2 05:17:58 localhost ovn_metadata_agent[159477]: 2025-12-02 10:17:58.717 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:17:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v732: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 65 KiB/s wr, 3 op/s Dec 2 05:17:59 localhost nova_compute[281045]: 2025-12-02 10:17:59.423 281049 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:17:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:18:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:18:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:18:00 localhost systemd[1]: tmp-crun.BEHGiu.mount: Deactivated successfully. Dec 2 05:18:00 localhost podman[326122]: 2025-12-02 10:18:00.085054225 +0000 UTC m=+0.082641702 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:18:00 localhost systemd[1]: tmp-crun.IyY5Fb.mount: Deactivated successfully. Dec 2 05:18:00 localhost podman[326122]: 2025-12-02 10:18:00.120322276 +0000 UTC m=+0.117909743 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 2 05:18:00 localhost podman[326123]: 2025-12-02 10:18:00.127395522 +0000 UTC m=+0.121348278 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:18:00 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 05:18:00 localhost podman[326123]: 2025-12-02 10:18:00.137440551 +0000 UTC m=+0.131393347 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:18:00 localhost podman[326135]: 2025-12-02 10:18:00.102897342 +0000 UTC m=+0.084643155 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible) Dec 2 05:18:00 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:18:00 localhost podman[326135]: 2025-12-02 10:18:00.18802126 +0000 UTC m=+0.169767113 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, 
org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:18:00 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. Dec 2 05:18:00 localhost podman[326125]: 2025-12-02 10:18:00.206850358 +0000 UTC m=+0.194100569 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 05:18:00 localhost podman[326125]: 2025-12-02 10:18:00.244254143 +0000 UTC m=+0.231504394 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, 
org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 2 05:18:00 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:18:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v733: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 45 KiB/s wr, 2 op/s Dec 2 05:18:01 localhost nova_compute[281045]: 2025-12-02 10:18:01.851 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:01 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e32efa73-156a-46e5-a7b8-279ab8d48b0b", "format": "json"}]: dispatch Dec 2 05:18:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:18:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:18:01 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:18:01.858+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e32efa73-156a-46e5-a7b8-279ab8d48b0b' of type subvolume Dec 2 05:18:01 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e32efa73-156a-46e5-a7b8-279ab8d48b0b' of type subvolume Dec 2 05:18:01 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": 
"e32efa73-156a-46e5-a7b8-279ab8d48b0b", "force": true, "format": "json"}]: dispatch Dec 2 05:18:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < "" Dec 2 05:18:01 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e32efa73-156a-46e5-a7b8-279ab8d48b0b'' moved to trashcan Dec 2 05:18:01 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:18:01 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < "" Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0. Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.928661) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55 Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670681928718, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1288, "num_deletes": 259, "total_data_size": 1444728, "memory_usage": 1471824, "flush_reason": "Manual Compaction"} Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670681937318, "cf_name": "default", "job": 31, 
"event": "table_file_creation", "file_number": 56, "file_size": 950051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34284, "largest_seqno": 35570, "table_properties": {"data_size": 944687, "index_size": 2707, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13018, "raw_average_key_size": 20, "raw_value_size": 933281, "raw_average_value_size": 1472, "num_data_blocks": 118, "num_entries": 634, "num_filter_entries": 634, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670612, "oldest_key_time": 1764670612, "file_creation_time": 1764670681, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}} Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 8677 microseconds, and 2694 cpu microseconds. Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.937349) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 950051 bytes OK Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.937369) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.939390) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.939405) EVENT_LOG_v1 {"time_micros": 1764670681939401, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.939432) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 1438285, prev total WAL file size 1438609, number of live WAL files 2. Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.940109) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353233' seq:72057594037927935, type:22 .. 
'6C6F676D0034373735' seq:0, type:0; will stop at (end) Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(927KB)], [54(17MB)] Dec 2 05:18:01 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670681940198, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 18818387, "oldest_snapshot_seqno": -1} Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 14562 keys, 18691694 bytes, temperature: kUnknown Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670682040939, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 18691694, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18606972, "index_size": 47245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36421, "raw_key_size": 390576, "raw_average_key_size": 26, "raw_value_size": 18358121, "raw_average_value_size": 1260, "num_data_blocks": 1769, "num_entries": 14562, "num_filter_entries": 14562, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670681, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}} Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.041178) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 18691694 bytes Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.042968) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.7 rd, 185.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 17.0 +0.0 blob) out(17.8 +0.0 blob), read-write-amplify(39.5) write-amplify(19.7) OK, records in: 15101, records dropped: 539 output_compression: NoCompression Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.042986) EVENT_LOG_v1 {"time_micros": 1764670682042977, "job": 32, "event": "compaction_finished", "compaction_time_micros": 100800, "compaction_time_cpu_micros": 54695, "output_level": 6, "num_output_files": 1, "total_output_size": 18691694, "num_input_records": 15101, "num_output_records": 14562, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005541914/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670682043246, "job": 32, "event": "table_file_deletion", "file_number": 56} Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670682044999, "job": 32, "event": "table_file_deletion", "file_number": 54} Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.939989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.045161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.045169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.045173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.045176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:18:02 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.045179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:18:02 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": 
"79a413b5-c28c-47ee-83ea-fa37bb286785", "format": "json"}]: dispatch Dec 2 05:18:02 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:79a413b5-c28c-47ee-83ea-fa37bb286785, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:18:02 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:79a413b5-c28c-47ee-83ea-fa37bb286785, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:18:02 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:18:02.068+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '79a413b5-c28c-47ee-83ea-fa37bb286785' of type subvolume Dec 2 05:18:02 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '79a413b5-c28c-47ee-83ea-fa37bb286785' of type subvolume Dec 2 05:18:02 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "79a413b5-c28c-47ee-83ea-fa37bb286785", "force": true, "format": "json"}]: dispatch Dec 2 05:18:02 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:79a413b5-c28c-47ee-83ea-fa37bb286785, vol_name:cephfs) < "" Dec 2 05:18:02 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/79a413b5-c28c-47ee-83ea-fa37bb286785'' moved to trashcan Dec 2 05:18:02 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:18:02 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:79a413b5-c28c-47ee-83ea-fa37bb286785, vol_name:cephfs) < "" Dec 2 05:18:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:18:03.187 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:18:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:18:03.187 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:18:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:18:03.188 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:18:03 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v734: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 89 KiB/s wr, 3 op/s Dec 2 05:18:03 localhost podman[239757]: time="2025-12-02T10:18:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:18:03 localhost podman[239757]: @ - - [02/Dec/2025:10:18:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:18:03 localhost podman[239757]: @ - - [02/Dec/2025:10:18:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19264 "" 
"Go-http-client/1.1" Dec 2 05:18:04 localhost nova_compute[281045]: 2025-12-02 10:18:04.460 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:05 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v735: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 66 KiB/s wr, 2 op/s Dec 2 05:18:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:18:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:18:06 localhost systemd[1]: tmp-crun.GiX8Wv.mount: Deactivated successfully. Dec 2 05:18:06 localhost podman[326208]: 2025-12-02 10:18:06.10078138 +0000 UTC m=+0.105526034 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': 
'/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:18:06 localhost podman[326208]: 2025-12-02 10:18:06.115039988 +0000 UTC m=+0.119784652 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:18:06 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:18:06 localhost podman[326209]: 2025-12-02 10:18:06.181441992 +0000 UTC m=+0.182447921 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, 
summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, version=9.6, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container) Dec 2 05:18:06 localhost podman[326209]: 2025-12-02 10:18:06.198113912 +0000 UTC m=+0.199119841 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 05:18:06 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 05:18:06 localhost nova_compute[281045]: 2025-12-02 10:18:06.892 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:18:06 Dec 2 05:18:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 05:18:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap Dec 2 05:18:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['manila_metadata', 'backups', '.mgr', 'images', 'manila_data', 'vms', 'volumes'] Dec 2 05:18:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes Dec 2 05:18:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:18:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:18:06 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:18:06 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < "" Dec 2 05:18:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:18:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Dec 2 05:18:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 2 05:18:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:18:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:18:07 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc/.meta.tmp' Dec 2 05:18:07 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc/.meta.tmp' to config b'/volumes/_nogroup/3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc/.meta' Dec 2 05:18:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < "" Dec 2 05:18:07 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc", "format": "json"}]: dispatch Dec 2 05:18:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < "" Dec 2 05:18:07 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < "" Dec 2 05:18:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v736: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 
66 KiB/s wr, 2 op/s Dec 2 05:18:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:18:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:18:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 2 05:18:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:18:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Dec 2 05:18:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:18:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Dec 2 05:18:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:18:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 2 05:18:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:18:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 2 05:18:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:18:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002712673611111111 quantized to 32 (current 32) 
Dec 2 05:18:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:18:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0023612528789782243 of space, bias 4.0, pg target 1.8795572916666667 quantized to 16 (current 16) Dec 2 05:18:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:18:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:18:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:18:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:18:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:18:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:18:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:18:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:18:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:18:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:18:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:08 localhost nova_compute[281045]: 2025-12-02 10:18:08.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:09 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v737: 177 pgs: 177 active+clean; 
226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 82 KiB/s wr, 4 op/s Dec 2 05:18:09 localhost nova_compute[281045]: 2025-12-02 10:18:09.494 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:09 localhost nova_compute[281045]: 2025-12-02 10:18:09.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:10 localhost nova_compute[281045]: 2025-12-02 10:18:10.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:10 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch Dec 2 05:18:10 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < "" Dec 2 05:18:10 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < "" Dec 2 05:18:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v738: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 60 KiB/s wr, 3 op/s Dec 2 05:18:11 localhost 
nova_compute[281045]: 2025-12-02 10:18:11.925 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:18:12 localhost systemd[1]: tmp-crun.haXtEk.mount: Deactivated successfully. Dec 2 05:18:12 localhost podman[326251]: 2025-12-02 10:18:12.091041436 +0000 UTC m=+0.093526627 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, 
config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible) Dec 2 05:18:12 localhost openstack_network_exporter[241816]: ERROR 10:18:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:18:12 localhost openstack_network_exporter[241816]: ERROR 10:18:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:18:12 localhost openstack_network_exporter[241816]: ERROR 10:18:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:18:12 localhost openstack_network_exporter[241816]: ERROR 10:18:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:18:12 localhost openstack_network_exporter[241816]: Dec 2 05:18:12 localhost openstack_network_exporter[241816]: ERROR 10:18:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:18:12 localhost openstack_network_exporter[241816]: Dec 2 05:18:12 localhost podman[326251]: 2025-12-02 10:18:12.133861698 +0000 UTC m=+0.136346909 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:18:12 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:18:12 localhost nova_compute[281045]: 2025-12-02 10:18:12.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v739: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 84 KiB/s wr, 4 op/s Dec 2 05:18:13 localhost nova_compute[281045]: 2025-12-02 10:18:13.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:13 localhost nova_compute[281045]: 2025-12-02 10:18:13.575 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:18:13 localhost nova_compute[281045]: 2025-12-02 10:18:13.576 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:18:13 localhost nova_compute[281045]: 2025-12-02 10:18:13.576 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:18:13 localhost nova_compute[281045]: 2025-12-02 10:18:13.577 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:18:13 localhost nova_compute[281045]: 2025-12-02 10:18:13.577 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:18:13 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc", "format": "json"}]: dispatch Dec 2 05:18:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:18:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:18:13 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:18:13.682+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc' of type subvolume Dec 2 05:18:13 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported 
operation 'clone-status' is not allowed on subvolume '3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc' of type subvolume Dec 2 05:18:13 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc", "force": true, "format": "json"}]: dispatch Dec 2 05:18:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < "" Dec 2 05:18:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc'' moved to trashcan Dec 2 05:18:13 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:18:13 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < "" Dec 2 05:18:13 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:18:13 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3070259623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:18:14 localhost nova_compute[281045]: 2025-12-02 10:18:14.015 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:18:14 localhost nova_compute[281045]: 2025-12-02 10:18:14.257 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:18:14 localhost nova_compute[281045]: 2025-12-02 10:18:14.259 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11395MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", 
"vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:18:14 localhost nova_compute[281045]: 2025-12-02 10:18:14.260 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:18:14 localhost nova_compute[281045]: 2025-12-02 10:18:14.260 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:18:14 localhost nova_compute[281045]: 2025-12-02 10:18:14.459 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total 
usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:18:14 localhost nova_compute[281045]: 2025-12-02 10:18:14.460 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:18:14 localhost nova_compute[281045]: 2025-12-02 10:18:14.510 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:14 localhost nova_compute[281045]: 2025-12-02 10:18:14.518 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 05:18:14 localhost nova_compute[281045]: 2025-12-02 10:18:14.578 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 05:18:14 localhost nova_compute[281045]: 2025-12-02 10:18:14.579 281049 DEBUG 
nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 05:18:14 localhost nova_compute[281045]: 2025-12-02 10:18:14.595 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 05:18:14 localhost nova_compute[281045]: 2025-12-02 10:18:14.618 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: 
HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 05:18:14 localhost nova_compute[281045]: 2025-12-02 10:18:14.637 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:18:15 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": 
"json"} v 0) Dec 2 05:18:15 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3788442662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:18:15 localhost nova_compute[281045]: 2025-12-02 10:18:15.112 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:18:15 localhost nova_compute[281045]: 2025-12-02 10:18:15.118 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:18:15 localhost nova_compute[281045]: 2025-12-02 10:18:15.134 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:18:15 localhost nova_compute[281045]: 2025-12-02 10:18:15.137 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:18:15 localhost nova_compute[281045]: 2025-12-02 10:18:15.137 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:18:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v740: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 40 KiB/s wr, 2 op/s Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 
05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 
10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:15 localhost ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:16 localhost nova_compute[281045]: 2025-12-02 10:18:16.139 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:16 localhost nova_compute[281045]: 2025-12-02 10:18:16.139 
281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:18:16 localhost nova_compute[281045]: 2025-12-02 10:18:16.140 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:18:16 localhost nova_compute[281045]: 2025-12-02 10:18:16.157 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 05:18:16 localhost nova_compute[281045]: 2025-12-02 10:18:16.157 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:16 localhost nova_compute[281045]: 2025-12-02 10:18:16.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:16 localhost nova_compute[281045]: 2025-12-02 10:18:16.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:16 localhost nova_compute[281045]: 2025-12-02 10:18:16.529 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] 
Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 2 05:18:16 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "20f292f4-5867-4407-9e49-afe0674f9a28", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:18:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:20f292f4-5867-4407-9e49-afe0674f9a28, vol_name:cephfs) < "" Dec 2 05:18:16 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/20f292f4-5867-4407-9e49-afe0674f9a28/.meta.tmp' Dec 2 05:18:16 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/20f292f4-5867-4407-9e49-afe0674f9a28/.meta.tmp' to config b'/volumes/_nogroup/20f292f4-5867-4407-9e49-afe0674f9a28/.meta' Dec 2 05:18:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:20f292f4-5867-4407-9e49-afe0674f9a28, vol_name:cephfs) < "" Dec 2 05:18:16 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "20f292f4-5867-4407-9e49-afe0674f9a28", "format": "json"}]: dispatch Dec 2 05:18:16 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:20f292f4-5867-4407-9e49-afe0674f9a28, vol_name:cephfs) < "" Dec 2 05:18:16 localhost 
ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:20f292f4-5867-4407-9e49-afe0674f9a28, vol_name:cephfs) < "" Dec 2 05:18:16 localhost nova_compute[281045]: 2025-12-02 10:18:16.966 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:17 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b888758a-b516-4f6f-a2a7-c3912230af77", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 2 05:18:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b888758a-b516-4f6f-a2a7-c3912230af77, vol_name:cephfs) < "" Dec 2 05:18:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b888758a-b516-4f6f-a2a7-c3912230af77/.meta.tmp' Dec 2 05:18:17 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b888758a-b516-4f6f-a2a7-c3912230af77/.meta.tmp' to config b'/volumes/_nogroup/b888758a-b516-4f6f-a2a7-c3912230af77/.meta' Dec 2 05:18:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b888758a-b516-4f6f-a2a7-c3912230af77, vol_name:cephfs) < "" Dec 2 05:18:17 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b888758a-b516-4f6f-a2a7-c3912230af77", "format": "json"}]: dispatch Dec 
2 05:18:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b888758a-b516-4f6f-a2a7-c3912230af77, vol_name:cephfs) < "" Dec 2 05:18:17 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b888758a-b516-4f6f-a2a7-c3912230af77, vol_name:cephfs) < "" Dec 2 05:18:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v741: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 40 KiB/s wr, 2 op/s Dec 2 05:18:17 localhost nova_compute[281045]: 2025-12-02 10:18:17.542 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:17 localhost nova_compute[281045]: 2025-12-02 10:18:17.542 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:18:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v742: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 72 KiB/s wr, 4 op/s Dec 2 05:18:19 localhost nova_compute[281045]: 2025-12-02 10:18:19.536 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:20 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "20f292f4-5867-4407-9e49-afe0674f9a28", "format": "json"}]: dispatch Dec 2 05:18:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:20f292f4-5867-4407-9e49-afe0674f9a28, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:18:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:20f292f4-5867-4407-9e49-afe0674f9a28, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:18:20 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:18:20.744+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '20f292f4-5867-4407-9e49-afe0674f9a28' of type subvolume Dec 2 05:18:20 localhost ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '20f292f4-5867-4407-9e49-afe0674f9a28' of type subvolume Dec 2 05:18:20 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "20f292f4-5867-4407-9e49-afe0674f9a28", "force": true, "format": "json"}]: dispatch Dec 2 05:18:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:20f292f4-5867-4407-9e49-afe0674f9a28, vol_name:cephfs) < "" Dec 2 05:18:20 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/20f292f4-5867-4407-9e49-afe0674f9a28'' moved to trashcan Dec 2 05:18:20 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:18:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:20f292f4-5867-4407-9e49-afe0674f9a28, vol_name:cephfs) < "" Dec 2 05:18:20 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b888758a-b516-4f6f-a2a7-c3912230af77", "format": "json"}]: dispatch Dec 2 05:18:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b888758a-b516-4f6f-a2a7-c3912230af77, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:18:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b888758a-b516-4f6f-a2a7-c3912230af77, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:18:20 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:18:20.965+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b888758a-b516-4f6f-a2a7-c3912230af77' of type subvolume Dec 2 05:18:20 localhost ceph-mgr[287188]: mgr.server reply reply (95) 
Operation not supported operation 'clone-status' is not allowed on subvolume 'b888758a-b516-4f6f-a2a7-c3912230af77' of type subvolume Dec 2 05:18:20 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b888758a-b516-4f6f-a2a7-c3912230af77", "force": true, "format": "json"}]: dispatch Dec 2 05:18:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b888758a-b516-4f6f-a2a7-c3912230af77, vol_name:cephfs) < "" Dec 2 05:18:20 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b888758a-b516-4f6f-a2a7-c3912230af77'' moved to trashcan Dec 2 05:18:20 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:18:20 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b888758a-b516-4f6f-a2a7-c3912230af77, vol_name:cephfs) < "" Dec 2 05:18:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v743: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 56 KiB/s wr, 2 op/s Dec 2 05:18:22 localhost nova_compute[281045]: 2025-12-02 10:18:22.000 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:22 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 05:18:22 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 4608 writes, 35K keys, 4608 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s#012Cumulative WAL: 4608 writes, 4608 syncs, 1.00 
writes per sync, written: 0.05 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2487 writes, 13K keys, 2487 commit groups, 1.0 writes per commit group, ingest: 18.30 MB, 0.03 MB/s#012Interval WAL: 2487 writes, 2487 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 123.3 0.30 0.11 16 0.019 0 0 0.0 0.0#012 L6 1/0 17.83 MB 0.0 0.3 0.0 0.2 0.2 0.0 0.0 6.9 170.7 158.7 1.61 0.71 15 0.107 203K 7710 0.0 0.0#012 Sum 1/0 17.83 MB 0.0 0.3 0.0 0.2 0.3 0.1 0.0 7.9 143.8 153.1 1.91 0.81 31 0.062 203K 7710 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 13.4 158.9 160.4 0.93 0.43 16 0.058 114K 4245 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.3 0.0 0.2 0.2 0.0 0.0 0.0 170.7 158.7 1.61 0.71 15 0.107 203K 7710 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 124.3 0.30 0.11 15 0.020 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 
total, 600.0 interval#012Flush(GB): cumulative 0.036, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.29 GB write, 0.24 MB/s write, 0.27 GB read, 0.23 MB/s read, 1.9 seconds#012Interval compaction: 0.15 GB write, 0.25 MB/s write, 0.14 GB read, 0.25 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562ea3bdf1f0#2 capacity: 304.00 MB usage: 22.75 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000232 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1308,21.44 MB,7.05272%) FilterBlock(31,587.92 KB,0.188863%) IndexBlock(31,756.09 KB,0.242886%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Dec 2 05:18:22 localhost nova_compute[281045]: 2025-12-02 10:18:22.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v744: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 98 KiB/s wr, 4 op/s Dec 2 05:18:24 localhost nova_compute[281045]: 2025-12-02 10:18:24.579 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 
26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v745: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 74 KiB/s wr, 3 op/s Dec 2 05:18:26 localhost nova_compute[281045]: 2025-12-02 10:18:26.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:26 localhost nova_compute[281045]: 2025-12-02 10:18:26.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 2 05:18:27 localhost nova_compute[281045]: 2025-12-02 10:18:27.051 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:27 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "format": "json"}]: dispatch Dec 2 05:18:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:18:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:18:27 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": 
"a33ca4d0-df57-473d-9fc9-9e83431eec70", "force": true, "format": "json"}]: dispatch Dec 2 05:18:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, vol_name:cephfs) < "" Dec 2 05:18:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70'' moved to trashcan Dec 2 05:18:27 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:18:27 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, vol_name:cephfs) < "" Dec 2 05:18:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v746: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 74 KiB/s wr, 3 op/s Dec 2 05:18:27 localhost nova_compute[281045]: 2025-12-02 10:18:27.452 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 2 05:18:27 localhost nova_compute[281045]: 2025-12-02 10:18:27.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v747: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 
GiB avail; 511 B/s rd, 85 KiB/s wr, 4 op/s Dec 2 05:18:29 localhost nova_compute[281045]: 2025-12-02 10:18:29.583 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:30 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "snap_name": "7af3a8b2-5504-4261-9144-956137288f3e_fb2677f6-3453-4240-a85b-11d96bc9c80e", "force": true, "format": "json"}]: dispatch Dec 2 05:18:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7af3a8b2-5504-4261-9144-956137288f3e_fb2677f6-3453-4240-a85b-11d96bc9c80e, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < "" Dec 2 05:18:30 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp' Dec 2 05:18:30 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp' to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta' Dec 2 05:18:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7af3a8b2-5504-4261-9144-956137288f3e_fb2677f6-3453-4240-a85b-11d96bc9c80e, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < "" Dec 2 05:18:30 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "snap_name": 
"7af3a8b2-5504-4261-9144-956137288f3e", "force": true, "format": "json"}]: dispatch Dec 2 05:18:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7af3a8b2-5504-4261-9144-956137288f3e, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < "" Dec 2 05:18:30 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp' Dec 2 05:18:30 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp' to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta' Dec 2 05:18:30 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7af3a8b2-5504-4261-9144-956137288f3e, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < "" Dec 2 05:18:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. Dec 2 05:18:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:18:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:18:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:18:31 localhost systemd[1]: tmp-crun.H1jPON.mount: Deactivated successfully. Dec 2 05:18:31 localhost systemd[1]: tmp-crun.rl4K4f.mount: Deactivated successfully. 
Dec 2 05:18:31 localhost podman[326315]: 2025-12-02 10:18:31.095265524 +0000 UTC m=+0.091064971 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:18:31 localhost podman[326322]: 2025-12-02 10:18:31.112877003 +0000 UTC m=+0.099971024 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 2 05:18:31 localhost podman[326314]: 2025-12-02 10:18:31.06771067 +0000 UTC m=+0.070684966 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:18:31 localhost podman[326315]: 2025-12-02 10:18:31.132979919 +0000 UTC m=+0.128779356 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:18:31 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. 
Dec 2 05:18:31 localhost podman[326314]: 2025-12-02 10:18:31.151250039 +0000 UTC m=+0.154224295 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:18:31 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: 
Deactivated successfully. Dec 2 05:18:31 localhost podman[326321]: 2025-12-02 10:18:31.203312494 +0000 UTC m=+0.194552821 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:18:31 localhost podman[326322]: 2025-12-02 10:18:31.225638538 +0000 UTC m=+0.212732539 container 
exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller) Dec 2 05:18:31 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:18:31 localhost podman[326321]: 2025-12-02 10:18:31.243025301 +0000 UTC m=+0.234265598 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125) Dec 2 05:18:31 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:18:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v748: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 53 KiB/s wr, 2 op/s Dec 2 05:18:32 localhost ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3. Dec 2 05:18:32 localhost nova_compute[281045]: 2025-12-02 10:18:32.099 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v749: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 71 KiB/s wr, 5 op/s Dec 2 05:18:33 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "format": "json"}]: dispatch Dec 2 05:18:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ce71e0bd-fac0-489e-baae-8568840b81a1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:18:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ce71e0bd-fac0-489e-baae-8568840b81a1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 2 05:18:33 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:18:33.439+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ce71e0bd-fac0-489e-baae-8568840b81a1' of type subvolume Dec 2 05:18:33 localhost ceph-mgr[287188]: mgr.server reply 
reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ce71e0bd-fac0-489e-baae-8568840b81a1' of type subvolume Dec 2 05:18:33 localhost ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "force": true, "format": "json"}]: dispatch Dec 2 05:18:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < "" Dec 2 05:18:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1'' moved to trashcan Dec 2 05:18:33 localhost ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 2 05:18:33 localhost ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < "" Dec 2 05:18:33 localhost podman[239757]: time="2025-12-02T10:18:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:18:33 localhost podman[239757]: @ - - [02/Dec/2025:10:18:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:18:33 localhost podman[239757]: @ - - [02/Dec/2025:10:18:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19262 "" "Go-http-client/1.1" Dec 2 05:18:34 localhost nova_compute[281045]: 2025-12-02 10:18:34.586 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:35 localhost ceph-mgr[287188]: 
log_channel(cluster) log [DBG] : pgmap v750: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 28 KiB/s wr, 3 op/s Dec 2 05:18:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:18:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:18:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:18:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:18:37 localhost nova_compute[281045]: 2025-12-02 10:18:37.000 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:18:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Dec 2 05:18:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 2 05:18:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:18:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:18:37 localhost podman[326394]: 2025-12-02 10:18:37.08458053 +0000 UTC m=+0.080699623 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9) Dec 2 05:18:37 localhost podman[326394]: 2025-12-02 10:18:37.118816819 +0000 UTC m=+0.114935922 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-minimal-container) Dec 2 05:18:37 localhost nova_compute[281045]: 2025-12-02 10:18:37.136 281049 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:37 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. Dec 2 05:18:37 localhost systemd[1]: tmp-crun.y4PCD8.mount: Deactivated successfully. Dec 2 05:18:37 localhost podman[326393]: 2025-12-02 10:18:37.169714898 +0000 UTC m=+0.171122974 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:18:37 localhost podman[326393]: 2025-12-02 10:18:37.178672503 +0000 UTC m=+0.180080579 container exec_died 
3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:18:37 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:18:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v751: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 28 KiB/s wr, 3 op/s Dec 2 05:18:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:38 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e285 e285: 6 total, 6 up, 6 in Dec 2 05:18:39 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v753: 177 pgs: 3 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 162 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 59 KiB/s wr, 4 op/s Dec 2 05:18:39 localhost nova_compute[281045]: 2025-12-02 10:18:39.587 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:39 localhost podman[326580]: Dec 2 05:18:39 localhost podman[326580]: 2025-12-02 10:18:39.920283463 +0000 UTC m=+0.062971481 container create bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_torvalds, architecture=x86_64, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, 
GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, version=7, release=1763362218, description=Red Hat Ceph Storage 7) Dec 2 05:18:39 localhost systemd[1]: Started libpod-conmon-bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544.scope. Dec 2 05:18:39 localhost systemd[1]: Started libcrun container. Dec 2 05:18:39 localhost podman[326580]: 2025-12-02 10:18:39.987809872 +0000 UTC m=+0.130497920 container init bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_torvalds, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.buildah.version=1.41.4, ceph=True, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, vendor=Red Hat, Inc., version=7) Dec 2 05:18:39 localhost podman[326580]: 2025-12-02 10:18:39.892035408 +0000 UTC m=+0.034723486 image pull 
registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 05:18:40 localhost podman[326580]: 2025-12-02 10:18:40.000643726 +0000 UTC m=+0.143331794 container start bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_torvalds, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, RELEASE=main, architecture=x86_64, io.buildah.version=1.41.4, release=1763362218, io.openshift.tags=rhceph ceph, name=rhceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 05:18:40 localhost podman[326580]: 2025-12-02 10:18:40.000916794 +0000 UTC m=+0.143604862 container attach bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_torvalds, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , version=7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph 
Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.expose-services=, name=rhceph, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, CEPH_POINT_RELEASE=, RELEASE=main) Dec 2 05:18:40 localhost condescending_torvalds[326595]: 167 167 Dec 2 05:18:40 localhost systemd[1]: libpod-bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544.scope: Deactivated successfully. Dec 2 05:18:40 localhost podman[326580]: 2025-12-02 10:18:40.00864459 +0000 UTC m=+0.151332638 container died bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_torvalds, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , RELEASE=main, ceph=True, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 
9, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 05:18:40 localhost podman[326600]: 2025-12-02 10:18:40.102127955 +0000 UTC m=+0.080789687 container remove bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_torvalds, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, distribution-scope=public) Dec 2 05:18:40 localhost systemd[1]: libpod-conmon-bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544.scope: Deactivated successfully. 
Dec 2 05:18:40 localhost podman[326622]: Dec 2 05:18:40 localhost podman[326622]: 2025-12-02 10:18:40.317505964 +0000 UTC m=+0.068271814 container create 05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_leavitt, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 05:18:40 localhost systemd[1]: Started libpod-conmon-05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515.scope. Dec 2 05:18:40 localhost systemd[1]: Started libcrun container. 
Dec 2 05:18:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39521573b513841daec2656cdb27c44d3524f23bdddb85705a6670147a413798/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 05:18:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39521573b513841daec2656cdb27c44d3524f23bdddb85705a6670147a413798/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 05:18:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39521573b513841daec2656cdb27c44d3524f23bdddb85705a6670147a413798/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 05:18:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39521573b513841daec2656cdb27c44d3524f23bdddb85705a6670147a413798/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 05:18:40 localhost podman[326622]: 2025-12-02 10:18:40.382931788 +0000 UTC m=+0.133697648 container init 05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_leavitt, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_CLEAN=True, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.buildah.version=1.41.4) Dec 2 05:18:40 localhost podman[326622]: 2025-12-02 10:18:40.294017983 +0000 UTC m=+0.044783823 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 05:18:40 localhost podman[326622]: 2025-12-02 10:18:40.393943926 +0000 UTC m=+0.144709776 container start 05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_leavitt, vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.buildah.version=1.41.4, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.openshift.expose-services=, distribution-scope=public) Dec 2 05:18:40 localhost podman[326622]: 2025-12-02 10:18:40.394251955 +0000 UTC m=+0.145017855 container attach 05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_leavitt, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , ceph=True, RELEASE=main) Dec 2 05:18:40 localhost systemd[1]: var-lib-containers-storage-overlay-705f7264708481aa281e3a4f25153233325637f492ccc70e43236280f73bc22e-merged.mount: Deactivated successfully. 
Dec 2 05:18:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v754: 177 pgs: 3 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 162 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 59 KiB/s wr, 4 op/s Dec 2 05:18:41 localhost agitated_leavitt[326637]: [ Dec 2 05:18:41 localhost agitated_leavitt[326637]: { Dec 2 05:18:41 localhost agitated_leavitt[326637]: "available": false, Dec 2 05:18:41 localhost agitated_leavitt[326637]: "ceph_device": false, Dec 2 05:18:41 localhost agitated_leavitt[326637]: "device_id": "QEMU_DVD-ROM_QM00001", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "lsm_data": {}, Dec 2 05:18:41 localhost agitated_leavitt[326637]: "lvs": [], Dec 2 05:18:41 localhost agitated_leavitt[326637]: "path": "/dev/sr0", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "rejected_reasons": [ Dec 2 05:18:41 localhost agitated_leavitt[326637]: "Insufficient space (<5GB)", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "Has a FileSystem" Dec 2 05:18:41 localhost agitated_leavitt[326637]: ], Dec 2 05:18:41 localhost agitated_leavitt[326637]: "sys_api": { Dec 2 05:18:41 localhost agitated_leavitt[326637]: "actuators": null, Dec 2 05:18:41 localhost agitated_leavitt[326637]: "device_nodes": "sr0", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "human_readable_size": "482.00 KB", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "id_bus": "ata", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "model": "QEMU DVD-ROM", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "nr_requests": "2", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "partitions": {}, Dec 2 05:18:41 localhost agitated_leavitt[326637]: "path": "/dev/sr0", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "removable": "1", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "rev": "2.5+", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "ro": "0", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "rotational": "1", Dec 2 
05:18:41 localhost agitated_leavitt[326637]: "sas_address": "", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "sas_device_handle": "", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "scheduler_mode": "mq-deadline", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "sectors": 0, Dec 2 05:18:41 localhost agitated_leavitt[326637]: "sectorsize": "2048", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "size": 493568.0, Dec 2 05:18:41 localhost agitated_leavitt[326637]: "support_discard": "0", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "type": "disk", Dec 2 05:18:41 localhost agitated_leavitt[326637]: "vendor": "QEMU" Dec 2 05:18:41 localhost agitated_leavitt[326637]: } Dec 2 05:18:41 localhost agitated_leavitt[326637]: } Dec 2 05:18:41 localhost agitated_leavitt[326637]: ] Dec 2 05:18:41 localhost systemd[1]: libpod-05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515.scope: Deactivated successfully. Dec 2 05:18:41 localhost systemd[1]: libpod-05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515.scope: Consumed 1.175s CPU time. 
Dec 2 05:18:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 05:18:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 05:18:41 localhost podman[328743]: 2025-12-02 10:18:41.583599095 +0000 UTC m=+0.039274984 container died 05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_leavitt, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.expose-services=, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, architecture=x86_64) Dec 2 05:18:41 localhost systemd[1]: var-lib-containers-storage-overlay-39521573b513841daec2656cdb27c44d3524f23bdddb85705a6670147a413798-merged.mount: Deactivated successfully. 
Dec 2 05:18:41 localhost podman[328743]: 2025-12-02 10:18:41.626048116 +0000 UTC m=+0.081723995 container remove 05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_leavitt, release=1763362218, ceph=True, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 05:18:41 localhost systemd[1]: libpod-conmon-05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515.scope: Deactivated successfully. 
Dec 2 05:18:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 05:18:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 05:18:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 05:18:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 05:18:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 05:18:41 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 05:18:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 2 05:18:41 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 05:18:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 2 05:18:41 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev d00ff4a1-ce59-44dd-b4e4-7d20c6c4566d (Updating node-proxy deployment (+3 -> 3))
Dec 2 05:18:41 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev d00ff4a1-ce59-44dd-b4e4-7d20c6c4566d (Updating node-proxy deployment (+3 -> 3))
Dec 2 05:18:41 localhost ceph-mgr[287188]: [progress INFO root] Completed event d00ff4a1-ce59-44dd-b4e4-7d20c6c4566d (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 2 05:18:41 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 2 05:18:41 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 2 05:18:42 localhost openstack_network_exporter[241816]: ERROR 10:18:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 05:18:42 localhost openstack_network_exporter[241816]: ERROR 10:18:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:18:42 localhost openstack_network_exporter[241816]: ERROR 10:18:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:18:42 localhost openstack_network_exporter[241816]: ERROR 10:18:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 05:18:42 localhost openstack_network_exporter[241816]:
Dec 2 05:18:42 localhost openstack_network_exporter[241816]: ERROR 10:18:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 05:18:42 localhost openstack_network_exporter[241816]:
Dec 2 05:18:42 localhost nova_compute[281045]: 2025-12-02 10:18:42.143 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:42 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 2 05:18:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 2 05:18:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:18:42 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:42 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:42 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:42 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:42 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:42 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:42 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 05:18:42 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:42 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 2 05:18:43 localhost podman[328776]: 2025-12-02 10:18:43.085879844 +0000 UTC m=+0.085149810 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 2 05:18:43 localhost podman[328776]: 2025-12-02 10:18:43.127965123 +0000 UTC m=+0.127235129 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:18:43 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 2 05:18:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v755: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 40 KiB/s wr, 2 op/s
Dec 2 05:18:43 localhost nova_compute[281045]: 2025-12-02 10:18:43.559 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:43 localhost ovn_metadata_agent[159477]: 2025-12-02 10:18:43.561 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:18:43 localhost ovn_metadata_agent[159477]: 2025-12-02 10:18:43.561 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 2 05:18:44 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:18:44.219 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:18:44Z, description=, device_id=4ee86722-429a-4a47-a912-a41ad8c5f9ac, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=739db3f1-5def-4a06-ae92-802b21657418, ip_allocation=immediate, mac_address=fa:16:3e:ea:99:78, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3950, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:18:44Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m
Dec 2 05:18:44 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 2 05:18:44 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 2 05:18:44 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 2 05:18:44 localhost podman[328811]: 2025-12-02 10:18:44.439037273 +0000 UTC m=+0.060360981 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 2 05:18:44 localhost nova_compute[281045]: 2025-12-02 10:18:44.589 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:44 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:18:44.715 262347 INFO neutron.agent.dhcp.agent [None req-294e6647-79bd-4175-8bef-b2d1ac48c3e1 - - - - - -] DHCP configuration for ports {'739db3f1-5def-4a06-ae92-802b21657418'} is completed#033[00m
Dec 2 05:18:45 localhost nova_compute[281045]: 2025-12-02 10:18:45.193 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v756: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 40 KiB/s wr, 2 op/s
Dec 2 05:18:46 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:18:46.913 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:18:46Z, description=, device_id=ff064be5-371a-4a89-8fb9-b1f5eb5224da, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=421cc112-f5c1-4e29-a229-2aafbf7f14ae, ip_allocation=immediate, mac_address=fa:16:3e:ac:f5:c4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3953, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:18:46Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m
Dec 2 05:18:46 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 e286: 6 total, 6 up, 6 in
Dec 2 05:18:47 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 2 05:18:47 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 2 05:18:47 localhost podman[328850]: 2025-12-02 10:18:47.070749976 +0000 UTC m=+0.035863720 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 2 05:18:47 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 2 05:18:47 localhost nova_compute[281045]: 2025-12-02 10:18:47.144 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v758: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 219 B/s rd, 3.3 KiB/s wr, 1 op/s
Dec 2 05:18:47 localhost ovn_metadata_agent[159477]: 2025-12-02 10:18:47.563 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 2 05:18:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:18:47 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:18:47.605 262347 INFO neutron.agent.dhcp.agent [None req-cddefb12-04f4-4abc-ac82-98c814ae2ab0 - - - - - -] DHCP configuration for ports {'421cc112-f5c1-4e29-a229-2aafbf7f14ae'} is completed#033[00m
Dec 2 05:18:48 localhost nova_compute[281045]: 2025-12-02 10:18:48.145 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:49 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v759: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 2 05:18:49 localhost nova_compute[281045]: 2025-12-02 10:18:49.594 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:50 localhost nova_compute[281045]: 2025-12-02 10:18:50.111 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v760: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 2 05:18:52 localhost nova_compute[281045]: 2025-12-02 10:18:52.145 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:52 localhost nova_compute[281045]: 2025-12-02 10:18:52.236 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:18:53 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v761: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 2 05:18:54 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:18:54.140 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:18:53Z, description=, device_id=146339f0-3e49-419f-a49c-241664c75695, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=06e59d5f-dec2-4d76-9835-8a7968e9ba35, ip_allocation=immediate, mac_address=fa:16:3e:fd:06:12, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3967, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:18:53Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04#033[00m
Dec 2 05:18:54 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses
Dec 2 05:18:54 localhost podman[328886]: 2025-12-02 10:18:54.350289554 +0000 UTC m=+0.054422179 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:18:54 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 2 05:18:54 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 2 05:18:54 localhost neutron_dhcp_agent[262343]: 2025-12-02 10:18:54.574 262347 INFO neutron.agent.dhcp.agent [None req-5485d8ae-b7be-41d2-b1e7-1ebe1f110f0e - - - - - -] DHCP configuration for ports {'06e59d5f-dec2-4d76-9835-8a7968e9ba35'} is completed#033[00m
Dec 2 05:18:54 localhost nova_compute[281045]: 2025-12-02 10:18:54.610 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:55 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v762: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 2 05:18:55 localhost nova_compute[281045]: 2025-12-02 10:18:55.889 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:57 localhost nova_compute[281045]: 2025-12-02 10:18:57.147 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v763: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 2 05:18:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:18:57 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 2 05:18:57 localhost podman[328922]: 2025-12-02 10:18:57.679556819 +0000 UTC m=+0.056683398 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 2 05:18:57 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 2 05:18:57 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 2 05:18:57 localhost nova_compute[281045]: 2025-12-02 10:18:57.800 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v764: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 2 05:18:59 localhost nova_compute[281045]: 2025-12-02 10:18:59.613 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:19:00 localhost nova_compute[281045]: 2025-12-02 10:19:00.164 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:19:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v765: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 2 05:19:01 localhost nova_compute[281045]: 2025-12-02 10:19:01.966 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:19:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 2 05:19:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 2 05:19:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 2 05:19:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 2 05:19:02 localhost podman[328944]: 2025-12-02 10:19:02.09850007 +0000 UTC m=+0.095695632 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 2 05:19:02 localhost podman[328944]: 2025-12-02 10:19:02.131878393 +0000 UTC m=+0.129073915 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:19:02 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 2 05:19:02 localhost nova_compute[281045]: 2025-12-02 10:19:02.167 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:19:02 localhost podman[328945]: 2025-12-02 10:19:02.16933166 +0000 UTC m=+0.160825258 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 05:19:02 localhost podman[328945]: 2025-12-02 10:19:02.212003267 +0000 UTC m=+0.203496895 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 2 05:19:02 localhost podman[328952]: 2025-12-02 10:19:02.227578275 +0000 UTC m=+0.209941643 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 2 05:19:02 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 2 05:19:02 localhost podman[328952]: 2025-12-02 10:19:02.266702974 +0000 UTC m=+0.249066392 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller) Dec 2 05:19:02 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:19:02 localhost podman[328946]: 2025-12-02 10:19:02.35306477 +0000 UTC m=+0.339647038 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:19:02 localhost podman[328946]: 2025-12-02 10:19:02.387302949 +0000 UTC m=+0.373885297 container exec_died 
a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 05:19:02 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. 
Dec 2 05:19:02 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:03 localhost systemd[1]: tmp-crun.cYAAiK.mount: Deactivated successfully. Dec 2 05:19:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:19:03.188 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:19:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:19:03.189 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:19:03 localhost ovn_metadata_agent[159477]: 2025-12-02 10:19:03.189 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:19:03 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v766: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:03 localhost podman[239757]: time="2025-12-02T10:19:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:19:03 localhost podman[239757]: @ - - [02/Dec/2025:10:19:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:19:03 localhost podman[239757]: @ - - [02/Dec/2025:10:19:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19257 "" "Go-http-client/1.1" 
Dec 2 05:19:03 localhost nova_compute[281045]: 2025-12-02 10:19:03.918 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:03 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses Dec 2 05:19:03 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:19:03 localhost podman[329045]: 2025-12-02 10:19:03.969519076 +0000 UTC m=+0.063840897 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 2 05:19:03 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:19:04 localhost nova_compute[281045]: 2025-12-02 10:19:04.642 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:05 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v767: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:06 localhost nova_compute[281045]: 2025-12-02 10:19:06.916 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:06 localhost ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:19:06 Dec 2 05:19:06 localhost ceph-mgr[287188]: [balancer INFO root] Mode upmap, 
max misplaced 0.050000 Dec 2 05:19:06 localhost ceph-mgr[287188]: [balancer INFO root] do_upmap Dec 2 05:19:06 localhost ceph-mgr[287188]: [balancer INFO root] pools ['vms', 'manila_metadata', 'volumes', '.mgr', 'manila_data', 'images', 'backups'] Dec 2 05:19:06 localhost ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes Dec 2 05:19:06 localhost dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses Dec 2 05:19:06 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host Dec 2 05:19:06 localhost dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts Dec 2 05:19:06 localhost podman[329083]: 2025-12-02 10:19:06.968625945 +0000 UTC m=+0.061240107 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:19:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:19:06 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:19:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:19:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:19:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:19:07 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:19:07 localhost nova_compute[281045]: 2025-12-02 10:19:07.219 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:07 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v768: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 05:19:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:19:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 2 05:19:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:19:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Dec 2 05:19:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:19:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Dec 2 05:19:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:19:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 2 05:19:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:19:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 
'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 2 05:19:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:19:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 05:19:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 05:19:07 localhost ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002499749179927415 of space, bias 4.0, pg target 1.9898003472222223 quantized to 16 (current 16) Dec 2 05:19:07 localhost ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 05:19:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:19:07 localhost ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 05:19:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 05:19:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:19:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 05:19:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:19:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 05:19:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:19:07 localhost ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 05:19:07 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:07 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:19:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:19:08 localhost podman[329105]: 2025-12-02 10:19:08.080995797 +0000 UTC m=+0.079201617 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:19:08 localhost podman[329105]: 2025-12-02 10:19:08.092782588 +0000 UTC m=+0.090988388 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:19:08 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:19:08 localhost podman[329106]: 2025-12-02 10:19:08.135327302 +0000 UTC m=+0.130266892 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, release=1755695350, name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, version=9.6, distribution-scope=public) Dec 2 05:19:08 localhost podman[329106]: 2025-12-02 10:19:08.14605337 +0000 UTC m=+0.140992980 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 2 05:19:08 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully. 
Dec 2 05:19:09 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v769: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:09 localhost nova_compute[281045]: 2025-12-02 10:19:09.545 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:09 localhost nova_compute[281045]: 2025-12-02 10:19:09.546 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:09 localhost nova_compute[281045]: 2025-12-02 10:19:09.681 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:11 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v770: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:11 localhost nova_compute[281045]: 2025-12-02 10:19:11.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:12 localhost openstack_network_exporter[241816]: ERROR 10:19:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:19:12 localhost openstack_network_exporter[241816]: ERROR 10:19:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:19:12 localhost openstack_network_exporter[241816]: ERROR 10:19:12 appctl.go:144: Failed to get PID for ovn-northd: no 
control socket files found for ovn-northd Dec 2 05:19:12 localhost openstack_network_exporter[241816]: ERROR 10:19:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:19:12 localhost openstack_network_exporter[241816]: Dec 2 05:19:12 localhost openstack_network_exporter[241816]: ERROR 10:19:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:19:12 localhost openstack_network_exporter[241816]: Dec 2 05:19:12 localhost nova_compute[281045]: 2025-12-02 10:19:12.220 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:12 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:13 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v771: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:13 localhost nova_compute[281045]: 2025-12-02 10:19:13.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. 
Dec 2 05:19:14 localhost podman[329147]: 2025-12-02 10:19:14.046591777 +0000 UTC m=+0.057733520 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:19:14 localhost podman[329147]: 2025-12-02 10:19:14.086964193 +0000 UTC m=+0.098105966 container exec_died 
2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:19:14 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. 
Dec 2 05:19:14 localhost nova_compute[281045]: 2025-12-02 10:19:14.684 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:15 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v772: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:15 localhost nova_compute[281045]: 2025-12-02 10:19:15.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:15 localhost nova_compute[281045]: 2025-12-02 10:19:15.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:19:15 localhost nova_compute[281045]: 2025-12-02 10:19:15.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:19:15 localhost nova_compute[281045]: 2025-12-02 10:19:15.545 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Dec 2 05:19:15 localhost nova_compute[281045]: 2025-12-02 10:19:15.546 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:15 localhost nova_compute[281045]: 2025-12-02 10:19:15.565 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:19:15 localhost nova_compute[281045]: 2025-12-02 10:19:15.565 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:19:15 localhost nova_compute[281045]: 2025-12-02 10:19:15.565 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:19:15 localhost nova_compute[281045]: 2025-12-02 10:19:15.566 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:19:15 localhost nova_compute[281045]: 2025-12-02 
10:19:15.566 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:19:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:19:16 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3458857762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:19:16 localhost nova_compute[281045]: 2025-12-02 10:19:16.026 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:19:16 localhost nova_compute[281045]: 2025-12-02 10:19:16.229 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:19:16 localhost nova_compute[281045]: 2025-12-02 10:19:16.231 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11372MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:19:16 localhost nova_compute[281045]: 2025-12-02 10:19:16.231 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:19:16 localhost nova_compute[281045]: 2025-12-02 10:19:16.232 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:19:16 localhost nova_compute[281045]: 2025-12-02 10:19:16.291 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:19:16 localhost nova_compute[281045]: 2025-12-02 10:19:16.294 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:19:16 localhost nova_compute[281045]: 2025-12-02 10:19:16.308 281049 DEBUG 
oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:19:16 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:19:16 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3626538401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:19:16 localhost nova_compute[281045]: 2025-12-02 10:19:16.723 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:19:16 localhost nova_compute[281045]: 2025-12-02 10:19:16.728 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:19:16 localhost nova_compute[281045]: 2025-12-02 10:19:16.742 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:19:16 localhost nova_compute[281045]: 2025-12-02 10:19:16.744 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:19:16 localhost nova_compute[281045]: 2025-12-02 10:19:16.744 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:19:17 localhost nova_compute[281045]: 2025-12-02 10:19:17.263 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:17 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v773: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:17 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:17 localhost nova_compute[281045]: 2025-12-02 10:19:17.726 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:17 localhost nova_compute[281045]: 2025-12-02 10:19:17.726 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:18 localhost nova_compute[281045]: 2025-12-02 10:19:18.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:18 localhost nova_compute[281045]: 2025-12-02 10:19:18.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:19:19 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v774: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:19 localhost nova_compute[281045]: 2025-12-02 10:19:19.717 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:21 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v775: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:22 localhost nova_compute[281045]: 2025-12-02 10:19:22.317 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:22 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:23 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v776: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:24 localhost nova_compute[281045]: 2025-12-02 10:19:24.755 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:25 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v777: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:27 localhost nova_compute[281045]: 2025-12-02 10:19:27.362 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:27 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v778: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:27 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:29 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v779: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:29 localhost nova_compute[281045]: 2025-12-02 10:19:29.831 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:31 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v780: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:32 localhost nova_compute[281045]: 2025-12-02 10:19:32.411 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:32 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1. 
Dec 2 05:19:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0. Dec 2 05:19:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b. Dec 2 05:19:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf. Dec 2 05:19:33 localhost podman[329213]: 2025-12-02 10:19:33.089372037 +0000 UTC m=+0.080323461 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:19:33 localhost podman[329213]: 2025-12-02 10:19:33.100356014 +0000 UTC m=+0.091307438 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:19:33 localhost systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully. Dec 2 05:19:33 localhost systemd[1]: tmp-crun.DPwGcr.mount: Deactivated successfully. Dec 2 05:19:33 localhost podman[329214]: 2025-12-02 10:19:33.165001634 +0000 UTC m=+0.150779310 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:19:33 localhost podman[329211]: 2025-12-02 10:19:33.199607995 +0000 UTC m=+0.196579704 container health_status 
225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:19:33 localhost podman[329211]: 2025-12-02 10:19:33.208757135 +0000 UTC m=+0.205728844 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:19:33 localhost systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully. 
Dec 2 05:19:33 localhost podman[329212]: 2025-12-02 10:19:33.250430032 +0000 UTC m=+0.244844163 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:19:33 localhost podman[329212]: 2025-12-02 10:19:33.259449148 +0000 UTC m=+0.253863279 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:19:33 localhost systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully. Dec 2 05:19:33 localhost podman[329214]: 2025-12-02 10:19:33.280338928 +0000 UTC m=+0.266116604 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125) Dec 2 05:19:33 localhost systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully. 
Dec 2 05:19:33 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v781: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:33 localhost podman[239757]: time="2025-12-02T10:19:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:19:33 localhost podman[239757]: @ - - [02/Dec/2025:10:19:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1" Dec 2 05:19:33 localhost podman[239757]: @ - - [02/Dec/2025:10:19:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19268 "" "Go-http-client/1.1" Dec 2 05:19:34 localhost nova_compute[281045]: 2025-12-02 10:19:34.836 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:35 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v782: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:35 localhost sshd[329292]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:19:36 localhost systemd-logind[760]: New session 74 of user zuul. Dec 2 05:19:36 localhost systemd[1]: Started Session 74 of User zuul. Dec 2 05:19:36 localhost python3[329314]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-cf35-eb71-00000000000c-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 05:19:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 05:19:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Dec 2 05:19:36 localhost ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 2 05:19:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:19:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )] Dec 2 05:19:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 2 05:19:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 05:19:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 05:19:37 localhost ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0. Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.071573) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58 Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777071636, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1474, "num_deletes": 253, "total_data_size": 2571981, "memory_usage": 2646064, "flush_reason": "Manual Compaction"} Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777081892, "cf_name": "default", "job": 33, "event": 
"table_file_creation", "file_number": 59, "file_size": 1691608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35575, "largest_seqno": 37044, "table_properties": {"data_size": 1685803, "index_size": 3083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13812, "raw_average_key_size": 21, "raw_value_size": 1673636, "raw_average_value_size": 2570, "num_data_blocks": 131, "num_entries": 651, "num_filter_entries": 651, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670681, "oldest_key_time": 1764670681, "file_creation_time": 1764670777, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}} Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 10358 microseconds, and 3868 cpu microseconds. Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.081935) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 1691608 bytes OK Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.081957) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.083505) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.083518) EVENT_LOG_v1 {"time_micros": 1764670777083514, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.083540) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 2564890, prev total WAL file size 2564890, number of live WAL files 2. Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.084131) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133333033' seq:72057594037927935, type:22 .. 
'7061786F73003133353535' seq:0, type:0; will stop at (end) Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(1651KB)], [57(17MB)] Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777084188, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 20383302, "oldest_snapshot_seqno": -1} Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 14677 keys, 19065285 bytes, temperature: kUnknown Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777181544, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 19065285, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18980345, "index_size": 47143, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36741, "raw_key_size": 393590, "raw_average_key_size": 26, "raw_value_size": 18730054, "raw_average_value_size": 1276, "num_data_blocks": 1761, "num_entries": 14677, "num_filter_entries": 14677, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670777, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}} Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.181807) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 19065285 bytes Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.184679) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.2 rd, 195.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 17.8 +0.0 blob) out(18.2 +0.0 blob), read-write-amplify(23.3) write-amplify(11.3) OK, records in: 15213, records dropped: 536 output_compression: NoCompression Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.184705) EVENT_LOG_v1 {"time_micros": 1764670777184694, "job": 34, "event": "compaction_finished", "compaction_time_micros": 97426, "compaction_time_cpu_micros": 46086, "output_level": 6, "num_output_files": 1, "total_output_size": 19065285, "num_input_records": 15213, "num_output_records": 14677, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005541914/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777185024, "job": 34, "event": "table_file_deletion", "file_number": 59} Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777187487, "job": 34, "event": "table_file_deletion", "file_number": 57} Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.084033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.187545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.187551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.187554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.187557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:19:37 localhost ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.187560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:19:37 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v783: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:37 localhost nova_compute[281045]: 
2025-12-02 10:19:37.466 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:37 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:37 localhost ovn_controller[153778]: 2025-12-02T10:19:37Z|00241|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory Dec 2 05:19:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6. Dec 2 05:19:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be. Dec 2 05:19:38 localhost podman[329317]: 2025-12-02 10:19:38.6017252 +0000 UTC m=+0.094230819 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': 
'/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:19:38 localhost podman[329318]: 2025-12-02 10:19:38.650478763 +0000 UTC m=+0.138593937 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 2 05:19:38 localhost podman[329318]: 2025-12-02 10:19:38.661994547 +0000 UTC m=+0.150109711 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git, 
io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter) Dec 2 05:19:38 localhost systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated 
successfully. Dec 2 05:19:38 localhost podman[329317]: 2025-12-02 10:19:38.717999443 +0000 UTC m=+0.210505062 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:19:38 localhost systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully. 
Dec 2 05:19:39 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v784: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Dec 2 05:19:39 localhost nova_compute[281045]: 2025-12-02 10:19:39.872 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:41 localhost systemd[1]: session-74.scope: Deactivated successfully. Dec 2 05:19:41 localhost systemd-logind[760]: Session 74 logged out. Waiting for processes to exit. Dec 2 05:19:41 localhost systemd-logind[760]: Removed session 74. Dec 2 05:19:41 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v785: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Dec 2 05:19:42 localhost openstack_network_exporter[241816]: ERROR 10:19:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:19:42 localhost openstack_network_exporter[241816]: ERROR 10:19:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:19:42 localhost openstack_network_exporter[241816]: ERROR 10:19:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:19:42 localhost openstack_network_exporter[241816]: ERROR 10:19:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:19:42 localhost openstack_network_exporter[241816]: Dec 2 05:19:42 localhost openstack_network_exporter[241816]: ERROR 10:19:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:19:42 localhost openstack_network_exporter[241816]: Dec 2 05:19:42 localhost nova_compute[281045]: 2025-12-02 10:19:42.505 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 05:19:42 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 05:19:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 05:19:42 localhost ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:19:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 05:19:42 localhost ceph-mgr[287188]: [progress INFO root] update: starting ev d981bc46-e489-47e7-ab06-cf577cde6f0f (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:19:42 localhost ceph-mgr[287188]: [progress INFO root] complete: finished ev d981bc46-e489-47e7-ab06-cf577cde6f0f (Updating node-proxy deployment (+3 -> 3)) Dec 2 05:19:42 localhost ceph-mgr[287188]: [progress INFO root] Completed event d981bc46-e489-47e7-ab06-cf577cde6f0f (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 05:19:42 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 05:19:42 localhost ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": 
["destroyed"], "format": "json"} : dispatch Dec 2 05:19:43 localhost ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:19:43 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:19:43 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v786: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Dec 2 05:19:44 localhost nova_compute[281045]: 2025-12-02 10:19:44.912 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e. Dec 2 05:19:45 localhost podman[329447]: 2025-12-02 10:19:45.099517445 +0000 UTC m=+0.094246949 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd) Dec 2 05:19:45 localhost podman[329447]: 2025-12-02 10:19:45.113973638 +0000 UTC m=+0.108703182 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:19:45 localhost systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully. Dec 2 05:19:45 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v787: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Dec 2 05:19:46 localhost sshd[329466]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:19:47 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v788: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Dec 2 05:19:47 localhost ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events Dec 2 05:19:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 05:19:47 localhost nova_compute[281045]: 2025-12-02 10:19:47.508 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:47 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:48 localhost ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:19:49 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v789: 177 pgs: 177 
active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Dec 2 05:19:49 localhost nova_compute[281045]: 2025-12-02 10:19:49.957 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:51 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v790: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s Dec 2 05:19:52 localhost nova_compute[281045]: 2025-12-02 10:19:52.545 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:52 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:53 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v791: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s Dec 2 05:19:55 localhost nova_compute[281045]: 2025-12-02 10:19:55.010 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:55 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v792: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:57 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v793: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:19:57 localhost ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:57 localhost nova_compute[281045]: 2025-12-02 10:19:57.592 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 
05:19:59 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v794: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:20:00 localhost nova_compute[281045]: 2025-12-02 10:20:00.042 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:20:00 localhost ceph-mon[301710]: overall HEALTH_OK Dec 2 05:20:01 localhost ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v795: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Dec 2 05:20:02 localhost sshd[329469]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:20:02 localhost systemd-logind[760]: New session 75 of user zuul. Dec 2 05:20:02 localhost systemd[1]: Started Session 75 of User zuul.